As a project manager, you’ve probably engaged in a number of IT projects throughout your career, spanning complex monolithic systems to SaaS web apps. However, with the advancement of artificial intelligence and machine learning, new projects with different requirements and problems are appearing on the horizon at a rapid pace.
With the rise of these technologies, a working knowledge of these concepts is shifting from a “nice to have” to an essential skill for technical project managers. According to Gartner, by 2020, AI will generate 2.3 million jobs, exceeding the 1.8 million that it will remove, and will generate $2.9 trillion in business value by 2021. Google’s CEO goes so far as to say that “AI is one of the most important things humanity is working on. It is more profound than […] electricity or fire.”
With applications of artificial intelligence already disrupting industries ranging from finance to healthcare, technical PMs who want to seize this opportunity must understand how AI project management is distinct and how they can best prepare for the changing landscape.
What It All Means: AI vs. ML
Before going deeper, it’s important to have a solid understanding of what AI really is. With many different terms often used interchangeably, let’s dive into the most common definitions first.
The progression of AI, machine learning, and deep learning
Artificial Intelligence (AI)
AI is a field of computer science dedicated to solving problems which otherwise require human intelligence—for example, pattern recognition, learning, and generalization.
In recent years, this term has often been overused to denote artificial general intelligence (AGI), which refers to self-aware computer programs capable of real cognition. Nevertheless, most AI systems for the foreseeable future will be what computer scientists call “narrow AI,” meaning that they will be designed to perform one cognitive task really well rather than truly “think” for themselves.
Machine Learning (ML)
Machine learning is a subset of artificial intelligence that uses statistical techniques to give computers the ability to “learn” from data without being explicitly programmed.
AI and ML have been used interchangeably by many companies in recent years due to the success of some machine learning methods in the field of AI. To be clear, machine learning denotes a program’s ability to learn, while artificial intelligence encompasses learning along with other functions.
To learn more about neural networks and deep learning, please refer to the appendix at the end of this article.
An Important Distinction: AI vs. Standard Algorithms
A key characteristic of AI is that its algorithms use a large amount of data to adjust their internal structure so that, when new data is presented, it is categorized in accordance with the data seen previously. We call this “learning” from the data, as opposed to operating according to categorization instructions written strictly in the code.
Imagine that we want to write a program that can tell cars apart from trucks. In the traditional programming approach, we would try to write a program that looks for specific, indicative features, such as bigger wheels or a longer body. We would have to write code specifically defining how these features look and where they should be found in a photo. Writing such a program and making it work reliably is very difficult, and it would likely yield both false positives and false negatives, to the point where it may not be usable at all.
This is where AI algorithms become very useful. Once an AI algorithm is trained, we can show it many examples, and it adjusts its internal structure to start detecting features relevant to the successful classification of the pictures instead of relying on static, prescribed feature definitions.
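To make the contrast concrete, here is a toy sketch in Python. The single feature (vehicle length), the threshold logic, and all data values are hypothetical; real image classifiers learn far richer features, but the principle of deriving the decision boundary from examples rather than hard-coding it is the same.

```python
# Traditional approach: a hand-written rule with a hard-coded cutoff.
def rule_based_classify(length_m):
    return "truck" if length_m > 6.0 else "car"

# "Learning" approach: derive the cutoff from labeled examples instead.
def learn_threshold(examples):
    # examples: list of (length_in_meters, label) pairs
    cars = [l for l, lab in examples if lab == "car"]
    trucks = [l for l, lab in examples if lab == "truck"]
    # Place the boundary midway between the two class averages.
    return (sum(cars) / len(cars) + sum(trucks) / len(trucks)) / 2

training_data = [(4.2, "car"), (4.8, "car"), (5.1, "car"),
                 (9.5, "truck"), (12.0, "truck"), (8.7, "truck")]
threshold = learn_threshold(training_data)

def learned_classify(length_m):
    return "truck" if length_m > threshold else "car"

print(learned_classify(10.0))  # classified using a boundary derived from the data
```

Feed the learner different training data and the boundary moves accordingly, with no change to the code itself; that is the essential difference from the hand-written rule.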
AI Project Management in Practice
Data Is King
Humans are not great at dealing with large volumes of data, and the sheer volume of data available to us sometimes prevents us from using it directly. This is where AI systems come in.
A core concept of AI systems is that their predictions are only as good as their data. For example, an algorithm trained on a million data points will outperform the same algorithm trained on 10,000 data points. Moreover, BCG reports that “many companies do not understand the importance of data and training to AI success. Frequently, better data is more crucial to building an intelligent system than better algorithms are, much as nurture often outweighs nature in human beings.”
With this knowledge in tow, preparing and cleaning data is something that will become more prevalent in the project process. This step is often the most labor-intensive part of building an AI system, as most businesses do not have the data ready in the correct formats—thus, it may take a while for data analysts to complete this essential step.
Data preparation is a key step in AI project management.
Additionally, the data infrastructure setup and data cleaning jobs are much more linear than usual software development and might require a different project management methodology.
To summarize, it can take much longer to build the right data infrastructure and prepare the data to be used than building the machine learning model to run the data. This is a big consideration for project managers as they manage teams and think about AI scope and project estimates.
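As a rough illustration of why this step consumes so much effort, here is a minimal Python sketch of a cleaning pass. The record format and field names are hypothetical; real pipelines typically use dedicated tooling, but the same operations of dropping incomplete rows, normalizing types, and deduplicating appear in almost every project.

```python
# Hypothetical raw records, as they might arrive from an export: all strings,
# with gaps and duplicates.
raw_records = [
    {"id": "1", "age": "34", "income": "52000"},
    {"id": "2", "age": "",   "income": "61000"},   # missing value -> dropped
    {"id": "3", "age": "29", "income": "47000"},
    {"id": "1", "age": "34", "income": "52000"},   # duplicate id -> dropped
]

def clean(records):
    seen, cleaned = set(), []
    for r in records:
        if not all(r.values()):          # discard incomplete rows
            continue
        if r["id"] in seen:              # discard duplicate ids
            continue
        seen.add(r["id"])
        cleaned.append({"id": r["id"],
                        "age": int(r["age"]),       # normalize types
                        "income": int(r["income"])})
    return cleaned

print(len(clean(raw_records)))  # 2 usable rows out of 4
```

Even this toy pass discards half the input; at production scale, with dozens of sources and formats, the same work dominates the schedule.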
Moreover, the dataset should be continuously updated with new data. Access to unique datasets may be the main deciding factor in which ML product is most successful. It’s critical to keep the data pipeline current in order to reach the best possible performance for your ML project, even post-launch.
The AI Development Lifecycle
Most of you will be familiar with the standard systems development lifecycle (SDLC) along with how different methodologies and technologies are shaping it. It is important to note that AI development will bring a new set of challenges to the field. We can split the AI development lifecycle into these steps: ideation and data discovery, prioritizing MVPs, and developing MVPs into fully-fledged products.
Ideation and Data Discovery
At this first stage, the focus should be on two key things: the end-user of the ML product and which data pools are available.
By approaching the problem from two independent sides, these techniques can help a project manager quickly narrow down the ML product opportunities available within a company. During this phase, top PMs can benefit from their knowledge of the machine learning space in order to better gauge how readily certain problems can be solved. Things move very fast in the field of ML, and some hard problems can be made much easier by new developments in research.
As previously mentioned, once the data is discovered, it needs to be cleaned and prepared. This specific task is normally done in linear steps, which do not fit neatly into typical project methodologies like Agile or Waterfall, although they can be forced into sprints. Typically, data cleaning is done iteratively by gradually increasing the size of datasets and preparing them in parallel to other development efforts.
Prioritizing the Minimum Viable Product (MVP)
The old truth that a working prototype of a smaller product beats an unfinished large one still stands for machine learning products. New ML MVPs should be prioritized based on speed of delivery and their value to the company. Products that can be delivered quickly, even smaller ones, make for good, quick wins for the whole team, so you should prioritize them first.
Preparing these MVPs in classic Agile fashion is a good idea, and the development team should focus on delivering ML models based on the continually improving datasets prepared independently by the data team. An important distinction here is that the data team does not necessarily need to work via the same Sprint structure as the team building the MVP.
MVP to Full-fledged Product
This step is where data infrastructure becomes key. If your ML product requires high-frequency API access from around the globe, then you should now consider how you can scale the infrastructure up to support the ML product.
This is where changes to the ML modules have to be carefully evaluated in order to avoid breaking the performance of the current product. Retraining the ML modules with new algorithms or datasets does not always bring a linear performance increase; therefore, a substantial amount of testing is required prior to live deployment. ML module testing for edge cases and potential generative adversarial network (GAN) attacks is still in its infancy, but it is definitely something for project managers to keep in mind when running a live ML product.
Key Roles Within the AI Development Lifecycle
The data-heavy requirements of developing ML applications bring new roles into the SDLC of AI products. To be a great project manager in the field of ML applications, you must be very familiar with the following three roles: data scientists, data engineers, and infrastructure engineers. Although they are sometimes guised under other titles, including machine learning engineers, machine learning infrastructure engineers, or machine learning scientists, it’s important to have a solid understanding of these core positions and their impact on the ML development process.
Three key roles which technical PMs should be familiar with: data scientist, data engineer, and infrastructure engineer
Data scientists are the individuals who build the machine learning models. They synthesize ideas based on their deep understanding of applied statistics, machine learning, and analytics and then apply their insights to solve real business problems.
Data scientists are sometimes seen as advanced versions of data analysts. Data scientists, however, usually have strong programming skills, are comfortable processing large amounts of data that span across data centers, and have expertise in machine learning.
They are also expected to have a good understanding of data infrastructures and big data mining, as well as the ability to perform exploratory exercises on their own, looking at data to find initial clues and insights within it.
Fundamental Skills: Python, R, Scala, Apache Spark, Hadoop, Machine Learning, Deep Learning, Statistics, Data Science, Jupyter, RStudio
Data engineers are software engineers who specialize in building software and infrastructure required for the ML products to operate. They tend to focus on the overarching architecture and, while they might not be experts in machine learning, analytics, or big data, they should have a good understanding of these topics in order to test their software and infrastructure. This is necessary to enable the machine learning models created by the data scientist to be successfully implemented and exposed to the real world.
Fundamental Skills: Python, Hadoop, MapReduce, Hive, Pig, Data Streaming, NoSQL, SQL, Programming, DashDB, MySQL, MongoDB, Cassandra
Infrastructure engineers take care of the ML products’ backbone: the infrastructure layer. While data engineers may build some of this infrastructure, it’s often built on top of the layer prepared and agreed upon by the infrastructure team.
Infrastructure engineers may work across multiple ML teams, with the goal of creating a scalable and efficient environment in which ML apps can scale to serve millions of users. Infrastructure engineers not only take care of the software level of platforms but also coordinate with data center partners to ensure that everything is running smoothly, from the geographical location of hosted data to the hardware itself. With these aspects gaining importance for ML projects, infrastructure engineers are becoming ever more important in AI-driven companies.
Fundamental Skills: Kubernetes, Mesos, EKS, GKE, Hadoop, Spark, HDFS, CEPH, AWS, Cloud Computing, Data Center Operations, End-to-end Computing Infrastructure, IT Infrastructure, Service Management
Common Challenges Today
With the emergence of AI and ML-based products, project managers are expected to run into both familiar and completely foreign challenges. Top PMs are acutely aware of these potential issues through the whole process, from scoping out projects through to completion.
Do You Really Need AI?
Despite the popularity and promise of AI, there is a good chance that the problem you are trying to solve doesn’t require an elaborate AI solution.
A lot of prediction problems can be solved using simpler and, in some cases, more reliable statistical regression models. It is very important for a PM to do a sanity check before starting a project to ensure the problem truly requires machine learning.
It is sometimes wise to start with a simpler statistical model and move in parallel with a machine learning-based solution. For example, if you are building a recommendation engine, it could be wise to start with a simpler solution with a faster development lifecycle, providing a good baseline that the subsequent ML model should outperform.
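As a sketch of what such a baseline might look like, here is a simple popularity-based recommender in Python. The interaction log is hypothetical; the point is that a baseline like this takes minutes to build and gives the subsequent ML model a concrete bar to clear.

```python
from collections import Counter

# Hypothetical interaction log: (user, item) pairs from usage data.
interactions = [("u1", "A"), ("u2", "A"), ("u3", "B"),
                ("u1", "C"), ("u4", "A"), ("u2", "B")]

def popularity_baseline(interactions, k=2):
    """Recommend the k most-interacted-with items to every user.

    No personalization, no learning: just a statistical baseline that any
    ML-based recommender should demonstrably outperform.
    """
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(k)]

print(popularity_baseline(interactions))  # ['A', 'B']
```

If the trained recommendation model cannot beat this on your evaluation metric, that is an early and cheap signal to revisit the approach before investing further.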
AI Scope Creep
The most common causes of scope creep in ML projects are related to trying to do too many things at once and underestimating the effort needed to prepare the data.
To tackle the first problem, manage the stakeholders so they understand that it’s better to start with quick wins rather than grandiose plans. Communicate this approach continuously throughout the project, as you build and test.
Start with small, atomic features that can be easily defined and tested. If you find yourself with a complex task, try to break it down into simpler tasks that are good proxies for your main task. It should be easy to communicate what these tasks set out to accomplish.
For example, if you are trying to predict when a user will click on a specific ad, you can first try predicting whether the user dismisses the ad entirely. In this approach, the problem is simplified and can be better accommodated and predicted by the current ML models. Facebook has made a great series going deeper into this topic, focusing more on the ML pipeline from inception to delivery of the model.
To address the second contributor to scope creep, make sure that you are capable of preparing the data to support your ML projects. Simply assuming you have the data needed, in the format needed, is the most common mistake PMs make when just starting with ML projects. With data preparation and cleaning often being the more lengthy part of the ML project process, managing this step is essential. Make sure your data scientist has access to the right data and can check its quality and validity before coming up with ML features they wish to build.
Prepare to do data labeling and cleaning as a continuous exercise throughout the project, not just as an initial step, as the project can always benefit from better and more data. Since this is not the most captivating task, split the work into sprints so your data team can feel the progress of their efforts instead of facing an endless ticket backlog.
Sometimes, companies outsource data labeling to third parties. While this can help to save time and up-front costs, it can also produce unreliable data, ultimately hindering the success of your ML model. To avoid this, use the multiple overlap technique, where every piece of data is checked by multiple parties and only used if their results match.
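A minimal sketch of that overlap check in Python (the annotation format, item IDs, and labels are all hypothetical):

```python
def consensus_labels(annotations):
    """Keep an item only if every annotator assigned the same label.

    annotations: {item_id: [label from each independent annotator]}
    """
    return {item: labels[0]
            for item, labels in annotations.items()
            if len(set(labels)) == 1}

votes = {"img_1": ["cat", "cat", "cat"],
         "img_2": ["cat", "dog", "cat"],   # disagreement -> discarded
         "img_3": ["dog", "dog", "dog"]}

print(consensus_labels(votes))  # {'img_1': 'cat', 'img_3': 'dog'}
```

The discard rate from this check is itself a useful metric: a high level of annotator disagreement usually means the labeling instructions need tightening before more budget is spent.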
When project planning, accommodate enough time for the data team to make adjustments in case your labeling requirements change mid-project and relabeling is required.
Finally, check if your data can be easily used with existing ML methods instead of inventing new ML methods, as starting from zero can drastically increase the project time and scope. Note that if you are trying to solve an ML problem that has not yet been solved, there is a good chance that you will fail. Despite machine learning’s success and the number of research papers published, solving ML problems can be a very difficult endeavor. It’s always easiest to start with an area of ML which has a lot of good examples and algorithms and try to improve upon it rather than trying to invent something new.
Machine Learning, Expectations, and UX
Every PM should be ready to think about the user experience of the AI products they’re creating and how to best manage the team that is building them. Google wrote a great piece about their way of thinking about UX and AI, with an emphasis on human interaction.
This point is especially important if your ML product has to interact with operators or even be replaced by them. The design should add the minimum necessary amount of stress to operators and users of the system. For example, chatbots are often based on machine learning, but they can seamlessly be taken over by a human operator.
There is also a possibility that stakeholders may expect much more from machine learning products than what they can deliver. This is usually a problem stemming from the hype created by the media when writing about AI products, and thus, it is important for the project manager to set reasonable expectations.
Be sure to explain what the AI tool really is and can achieve for your stakeholders so that you can manage their expectations well enough before they test the tool. Good UX is great, but it can’t deliver value to users with unrealistic expectations, so it is essential for any PM involved to manage these and educate their stakeholders about AI and its realistic capabilities.
Quality Assurance (QA) and Testing Practices in ML
AI in its current form is a relatively new field. Never before have there been so many applications using deep learning to achieve their goals. These new developments come with their own set of challenges, particularly in testing.
While it’s relatively easy to test standard software that has a clear “rule set” written by people, it is much more difficult to exhaustively test machine learning models, especially those built using neural networks. Currently, most ML models are tested by the data scientists themselves, and there are few agreed-upon methods for standard QA teams to ensure that ML products do not fail in unexpected ways.
With new ways to manipulate the results of known models, such as GAN-based attacks, comprehensive model testing will become ever more important. This will become a priority for a lot of ML projects, and we will see more “integration”-type tests for ML models in the years to come. For most simple projects, this may not currently be a tangible problem, but it’s important to keep it in mind if you are building a mission-critical ML product.
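One simple, concrete form such a test can take is a pre-deployment gate that compares the retrained model against the current one on a held-out set. The sketch below uses toy threshold “models” and hypothetical data; in practice, the predict functions would wrap your real models.

```python
def accuracy(predict, holdout):
    """Fraction of held-out examples the model classifies correctly."""
    correct = sum(1 for features, label in holdout if predict(features) == label)
    return correct / len(holdout)

def safe_to_deploy(new_predict, old_predict, holdout, min_gain=0.0):
    """Retraining does not guarantee improvement, so measure before shipping."""
    return accuracy(new_predict, holdout) >= accuracy(old_predict, holdout) + min_gain

# Toy example: "models" are thresholds on a single numeric feature.
holdout = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
old_model = lambda x: int(x > 0.8)   # misses one positive example
new_model = lambda x: int(x > 0.5)   # catches both positives

print(safe_to_deploy(new_model, old_model, holdout))  # True
```

Wiring a check like this into the release pipeline turns “did retraining break anything?” from a manual judgment call into an automated gate.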
ML Model Theft and Plagiarism
Since this Wired article was published and the original paper was presented at the USENIX Security conference in 2016, it has become apparent that it is possible to plagiarize a live ML model.
This is still pretty difficult to accomplish well, but if you have a model running via a publicly available API, it’s important to be aware of this possibility. In theory, a party with substantial access to it could train their own network based on yours and effectively copy your prediction capability.
The risk remains largely theoretical for now, but be sure to work with your team on a prevention strategy for possible attacks if this is a concern for your project.
The AI Talent Shortage
With the current demand for world-class AI experts, the competition for the right talent is fierce. In fact, the New York Times reports that world-class AI experts can make up to $1 million per year working for big Silicon Valley tech powerhouses. As a PM, while you look for AI experts to join your team, be aware of these dynamics, as they may impact your hiring cycles, budget, or quality of work.
This shortage extends beyond the innovative minds creating the newest deep learning algorithms and also applies to top-quality data engineers and data scientists.
Many of the most talented folks participate in machine learning competitions such as Kaggle where they can hope to win north of $100,000 for solving difficult machine learning problems. If it’s hard to hire ML experts locally, it is wise to look for outside-the-box solutions, like hiring specialized contractors remotely or running your own Kaggle competition for the most difficult ML problems.
Legal and Ethical Challenges
The legal and ethical challenges of AI in project management are twofold.
The first set of challenges stems from the data used to train the ML models. It is essential to understand where the data you use originates and, specifically, whether you have the rights and licenses to use it.
It’s always important to consult your lawyers to solve such questions before deploying a model trained on the data for which you may not have the right type of license. Since this is a relatively new field, many of these answers are not clear, but PMs should make sure that their teams are only using datasets which they have the rights to use.
Here is a good list of publicly available datasets for training your ML algorithms.
The second set of challenges comes from ensuring that your system doesn’t develop a systematic bias. There have been numerous cases of such problems in recent years. One camera company had to admit that its smile-recognition technology only detected people of a particular race because it had been trained only on faces from that race. Another example came from a large software company, which had to withdraw its self-learning Twitter bot after a few days, as a concerted effort by a group of internet trolls had it producing racial slurs and repeating wild conspiracy theories.
These problems can range from minor to project-destroying, so when developing critical systems, PMs should make sure they consider such possibilities and guard against them as early as possible.
Good Foundations Lead to Strong Structures
The progress of information management, leading to AI.
In summary, the impending AI revolution brings forth a set of interesting, dynamic projects that often come with a modified development process, a differing team archetype, and new challenges.
Top technical project managers have not only a good understanding of AI basics but also the intuition for the difficulty of each project step and what is truly possible to create with their team. Since AI is not a commercial off-the-shelf (COTS) solution, even companies which choose to purchase certain ML products will still have to invest in testing new things and managing their data and infrastructure correctly.
It’s clear that the types of software products and the processes for creating them are changing with the emergence of AI. Project managers who are able to grasp and execute on these new concepts will be instrumental players in creating the machine learning products of the future.
Additional Theory: DLs and NNs
In addition to the more common verbiage of artificial intelligence (AI) and machine learning (ML), project managers can benefit from being aware of further distinguishing deep learning (DL) and neural networks (NN).
Deep Learning (DL)
Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to classical task-specific algorithms.
Most modern deep learning models are based on an artificial neural network, although they can use various other methods.
Neural Networks (NN)
Neural networks are biologically inspired, connected mathematical structures which enable AI systems to “learn” from data presented to them.
We can imagine these networks as millions of small gates that open or close, depending on our data input. The success of these techniques was enabled by the growth of GPU computing power in recent years, allowing us to quickly adjust more of those “little gates” inside the neural networks.
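A toy forward pass in pure Python can make this “gates” intuition concrete. The weights below are arbitrary illustrative values; in a real network, training adjusts them based on data, and the network has millions of such gates rather than three.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'gate': weighted sum squashed through a sigmoid.

    The sigmoid output ranges between 0 (gate closed) and 1 (gate open).
    """
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

def tiny_network(x):
    """Toy network: three input features -> two hidden neurons -> one output."""
    h1 = neuron(x, [0.5, -0.2, 0.1], 0.0)
    h2 = neuron(x, [-0.3, 0.8, 0.4], 0.1)
    return neuron([h1, h2], [1.2, -0.7], -0.2)

print(round(tiny_network([1.0, 0.5, -0.5]), 3))
```

Training amounts to nudging each weight so that outputs like this one move closer to the desired answers, which is exactly the computation that modern GPUs accelerate.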
A neural network diagram
There are multiple types of neural networks, each accompanied by its own specific use cases and level of complexity. You might see terms like CNN (convolutional neural network) or RNN (recurrent neural network) used to describe different types of neural network architecture.
To better understand how they look and function, here is a great 3D visualization of how neural networks “look” while they are active.
Interested in Learning More about AI?
If, after reading this, you would like to go and explore the subject a little bit deeper, I recommend checking out these sources:
Understanding Neural Networks
If you want to dive deeper into the mechanics of how neural networks work, I suggest that you check out the 3Blue1Brown series on neural networks on YouTube. In my opinion, it is by far the best in-depth explanation of neural networks, delivered in simple terms and requiring no prior knowledge.
Staying Up to Date with AI News
If you want to stay up to date with the newest advancements in AI technology without spending hours reading academic papers, I recommend Two Minute Papers. This channel provides weekly two-minute updates on the most impressive new AI techniques and their implementations.
Learning the Basics of ML Development
If you ever want to dabble in code yourself, and you have some rudimentary Python skills, then you can check out Fast.ai. Their course allows anyone with basic development skills to start experimenting and playing around with neural networks.
Foundations of Machine Learning
This suggestion is for those who want to start from the very beginning and work their way to the top of understanding and implementing machine learning.
Created by the now-legendary Andrew Ng, who launched Coursera with this course, it requires a substantial time investment of several months, but it is one of the most productive ways to build a solid machine learning foundation.
Note: Key term definitions have been adapted from Wikipedia.
Understanding the Basics
Is machine learning the same as artificial intelligence?
No—although they are often used interchangeably, machine learning is a subset of artificial intelligence that is specifically characterized by a program’s ability to learn without being explicitly programmed.
What is deep learning in simple terms?
Deep learning is a class of machine learning methods built on learning data representations. Deep learning models are often based on artificial neural networks, although they can use various other methods.
What do you mean by neural network?
Neural networks are mathematical structures which enable AI systems to “learn” from the supplied data. We can imagine these networks as millions of small gates that open or close, depending on our data input.