Artificial intelligence (AI) is the new gold rush, and many companies are staking their claims in it. The number of software products labeled as "AI-powered" keeps rising. Yet, in many cases, these products are driven by simple statistics or by humans working behind the scenes.
In 2016, artificial intelligence wasn't even among the top 100 Google searches, and thus not on the radar of many investors or corporate buyers. But over the last three years its importance has grown, and every software company has become, at least on the surface, an AI company.
Organizations are overwhelmed by what's happening in and around the tech space. They're willing to take stakes in AI-powered businesses, but often lack the skills to evaluate AI startups or software products.
This trend threatens not only the investors willing to pour money into AI technologies. It's a public trust issue as well, since many governments are redirecting massive budgets toward advancements in the artificial intelligence field.
Gartner predicts that in 2020 AI will be among the top five investment priorities of many CIOs. Knowing what that money is actually buying is therefore essential.
What is AI Washing?
AI washing is an unfortunate trend that has emerged over the last few years. It's not the first "washing" we've been through. First, there was greenwashing, where companies exaggerated the environmental benefits of their products to boost sales. Then, during the rise of cloud computing, cloud washing stepped in. Software vendors took their legacy products and rebranded them as "pure cloud" by tweaking their infrastructure here and there. On the surface, it looked like a cloud product, but behind the scenes, it was a patchwork of cloud and on-premise components.
Nowadays, not surprisingly, it is artificial intelligence that has become the victim of these questionable tactics. The false labeling of technology as "artificial intelligence" is the essence of AI washing. Often, what's under the hood is the same as it has always been: simple statistics plus people.
In all these cases, the word "wash" has been applied, like a thin layer of paint, by marketing departments to freshen something up. But it's not entirely the fault of these software companies. Others (governments, private equity firms, corporations) have encouraged this behavior by shifting their investment budgets and priorities toward AI. Capital-hungry startups merely responded.
Investors Favour AI-based Products
While AI washing threatens how the public perceives this technology, there is no doubt AI is doing more than ever, for good or bad.
According to a PwC study, the most significant economic gains from AI will be in China (a 26 percent boost to GDP by 2030) and North America (14.5 percent), which together account for almost 70 percent of the global economic impact.
The IDC Spending Guide projects spending on AI systems to grow by nearly 40 percent through 2022. The retail industry will invest the most (the prediction for 2019 was $5.9 billion), targeting solutions such as automated customer service agents and product recommendations. Banking is the second-largest industry, with a $5.6 billion investment (2019) going toward AI-enabled solutions, including automated threat intelligence and fraud prevention.
With such market potential, investors are keen to pour money into AI businesses. From 2011 through 2018, investors allocated more than $50 billion to AI start-ups (OECD report based on Crunchbase).
China has seen the most significant increase. In 2015, barely 3 percent of global investment in AI startups went to China. Two years later, it was 36 percent, a twelve-fold increase. AI is a priority for the Chinese government and other investors.
The European Union accounted for 8 percent of global AI equity investment in 2017. However, investment levels vary greatly across the member states. The UK leads, attracting 55 percent of EU investment over 2011 to 2018, followed by Germany (14 percent) and France (13 percent).
China's and the EU's investments are remarkable. But still, 70 to 80 percent of global VC investments are made in the United States.
Together, the USA, China, and the EU account for 93 percent of investment between 2011 and 2018.
To Build an AI Solution, a Company Requires Four Ingredients
When screening the market for AI startups or software products, these are the main aspects you should focus on.
Algorithms lie at the core of AI technologies, and the academic community drives most of the advances here. Only in rare instances do businesses have the capacity to develop such algorithms themselves. Access to these algorithms, however, is straightforward and mostly free.
Algorithms are tricky because some believe that the more complex, the better. But that's not the case. You can get quite remarkable results with a plain linear regression model. The art and science lie in balancing performance against cost.
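To make that concrete, here is a minimal sketch of a linear regression fitted with ordinary least squares. The data and the spend-versus-sales framing are illustrative assumptions, not real figures; the point is that a two-parameter model can already capture a genuine trend.

```python
import numpy as np

# Hypothetical data: advertising spend (x) vs. sales (y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])

# Ordinary least squares fit of a straight line: y ≈ slope * x + intercept
slope, intercept = np.polyfit(x, y, deg=1)

def predict(new_x):
    return slope * new_x + intercept

print(round(predict(6.0), 1))  # prints 11.8
```

Nothing exotic is needed here: `np.polyfit` with degree 1 is exactly the classic least-squares line, and for many business problems it is a perfectly respectable baseline.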
As a buyer or an investor, you should be looking at whether the software gets better over time. Does it learn from new data? Are the predictions becoming more accurate? These are the questions to ask. But be careful with vendors claiming nearly 100 percent accuracy for their model. That can be a sign of overfitting: such models work fine on the data they were trained on, but do not perform well in production, with new data.
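The overfitting trap can be sketched in a few lines. Assume (purely for illustration) that the true relationship is a noisy straight line; a degree-9 polynomial can pass through every one of 10 training points, yielding "perfect accuracy", while a simple line generalizes better to fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship (assumed for this sketch): y = 2x + noise
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, 10)
x_test = np.linspace(0.0, 1.0, 50)
y_test = 2 * x_test + rng.normal(0, 0.2, 50)

# A degree-9 polynomial interpolates all 10 training points exactly...
overfit = np.polyfit(x_train, y_train, deg=9)
# ...while a straight line only captures the overall trend
simple = np.polyfit(x_train, y_train, deg=1)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print(mse(overfit, x_train, y_train))  # essentially zero: "100% accuracy"
print(mse(overfit, x_test, y_test))    # typically much worse on new data
print(mse(simple, x_test, y_test))     # the humble line holds up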
Vendors deploying real artificial intelligence products will have a team of data scientists on staff (or they'll use partners). The number of these highly specialized people is small, and they are concentrated in a few companies (Google, Facebook, Amazon, Microsoft). Assembling a top-notch data science team is not only tricky but costly too.
If you're in a formal investment process or an RFP (Request for Proposal), spend some time inspecting the job profiles of the provider. Ask for their credentials. Their team should mix domain expertise, mathematics, and computer science. These are rarely covered by one person, so do not look for a unicorn.
Data is the fuel of an AI company. Having access to data is the most critical part nowadays. Some startups seek agreements with providers to secure access to their data before they even write the first line of code.
Commercial success is determined not only by access to data but also by its quality. In a typical data science project, collecting and cleaning data accounts for 70 to 90 percent of the overall project time. When data sources are well managed and integrated, that share can drop to around 50 percent.
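To make that time sink tangible, here is a minimal sketch of the kind of normalization that dominates real projects. The records, field names, and cleaning rules are all hypothetical; real pipelines apply dozens of such rules.

```python
# Hypothetical raw records, as they often arrive from partner systems
raw = [
    {"customer": " Alice ", "revenue": "1,200"},
    {"customer": "BOB",     "revenue": "950"},
    {"customer": " Alice ", "revenue": "1,200"},  # exact duplicate
    {"customer": "carol",   "revenue": None},     # missing value
]

def clean(records):
    """Normalize names, parse numbers, drop incomplete and duplicate rows."""
    seen, out = set(), []
    for r in records:
        if r["revenue"] is None:                      # drop rows missing revenue
            continue
        name = r["customer"].strip().title()          # normalize whitespace/case
        revenue = int(r["revenue"].replace(",", ""))  # "1,200" -> 1200
        key = (name, revenue)
        if key in seen:                               # de-duplicate
            continue
        seen.add(key)
        out.append({"customer": name, "revenue": revenue})
    return out

print(clean(raw))
# [{'customer': 'Alice', 'revenue': 1200}, {'customer': 'Bob', 'revenue': 950}]
```

Each rule is trivial on its own; the cost comes from discovering, negotiating, and maintaining hundreds of them across messy sources.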
To validate this point, ask for any agreements the company has with other partners. And ask about the data processing pipeline (keywords here are: API, Hadoop, MapReduce).
With the rise of cloud computing, infrastructure became a low barrier to entry. Still, inexperienced companies can end up with a lousy infrastructure design that is expensive to fix.
The AI infrastructure consists of three parts: the network, servers, and storage. Each must be equally powerful. The weakest link will impair the performance of the whole chain.
A GPU (graphics processing unit), for example, can accelerate deep learning a hundredfold compared to a CPU (central processing unit). A flaw in the server design will cause delays in the process. It can also result in wasted money.
Another aspect to examine is where the AI algorithms are executed. Putting all operations solely in the cloud is not efficient. Some should be performed locally, on the "edge". For example, facial recognition software at an airport should conduct the analysis locally, as the time taken to send information between the edge and the cloud introduces delay.
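A back-of-the-envelope calculation shows why the round trip matters. All figures below are illustrative assumptions, not measurements; the point is that a slower edge chip can still beat a fast cloud GPU once network time is counted.

```python
# Illustrative latency budget for one face-recognition check (assumed figures)
network_round_trip_ms = 80  # camera -> cloud -> camera over a WAN
cloud_inference_ms = 20     # inference on a powerful cloud GPU
edge_inference_ms = 60      # inference on a modest on-device accelerator

cloud_total = network_round_trip_ms + cloud_inference_ms  # 100 ms end to end
edge_total = edge_inference_ms                            # 60 ms end to end

print(cloud_total, edge_total)  # prints 100 60: the edge wins despite slower hardware
```

The same arithmetic applies to any latency-sensitive workload; the network term often dwarfs the inference term.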
Also, AI-powered software often involves the processing of sensitive data. That’s why the AI infrastructure must be secured from end to end with state-of-the-art technology.
What's happening in the AI area is often more enthusiasm than substance. The hype that started around 2016 creates unrealistic expectations, a threatening trend that may lead to another AI winter.
As Yann LeCun, the former N.Y.U. researcher (now Chief AI Scientist at Facebook), writes:
“AI [has] ‘died’ about four times in five decades because of hype: people made wild claims (often to impress potential investors or funding agencies) and could not deliver. Backlash ensued. It happened twice with neural nets already: once in the late ’60s and again in the mid-’90s.”
At times like these, I like to think of the fundamental truth about our brain: it’s still the most complicated organ in the universe, and we still have no idea how it works.