Commentary: The IEEE crowd is skeptical of AI’s more optimistic claims, which turns out to be exactly what we need to push the field forward.
According to a recent McKinsey survey, most companies of all sizes are actively adopting AI. Hooray! The areas seeing the greatest lift from AI adoption include optimization of service operations, AI-based product enhancement, and contact center automation. Again, hooray! And when the general US population is asked about AI, the majority have a positive opinion about its potential. Hooray everywhere.
But if you ask the more engineering-focused IEEE Spectrum crowd, AI has a long, long way to go before they’re willing to stand up and applaud. IEEE Spectrum “Members are involved with difficult-to-penetrate vendor decision teams, usually in a managerial capacity,” according to its 2020 media kit. In other words, this is a high-level, highly technical bunch who aren’t overly impressed by puff pieces about the wonders of AI (although they may well believe AI has a bright future). No, when the IEEE Spectrum editors looked back at the 10 most popular articles of 2021, a clear trend emerged: “What’s wrong with machine learning today?”
SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)
All aboard the AI train
No one needs to be reminded that we are still in the AI hype phase. As tweeted by Michael McDonough, global director of economic research and chief economist at Bloomberg Intelligence, public mentions of artificial intelligence in earnings calls have skyrocketed since mid-2014:
This trend hasn’t slowed either since McDonough tweeted that in 2017. If anything, it has increased.
Yet even as C-level executives continue to find it advantageous to exaggerate how AI is affecting their businesses, the people charged with making AI work have been less optimistic. As revealed in Anaconda’s 2021 State of Data Science report, the biggest concern data scientists have with AI today is the possibility, even the likelihood, of bias in the algorithms. There also continues to be a significant shortage of staff capable of helping organizations maximize the value they derive from data. And even when companies have the right talent on staff, getting value from investments in artificial intelligence can still be elusive.
No wonder, then, that some suggest “The promise of true general artificial intelligence … remains elusive. Artificial stupidity reigns supreme.” (Disclosure: My IP law professor brother Clark Asay wrote that and, yeah, I kind of like it.)
So AI has a long way to go. We knew this, right? But what are the specific concerns of the technologists closest to AI implementations?
SEE: The Ethical Challenges of AI: A Guide for Leaders (Free PDF) (TechRepublic)
What can go wrong?
The most popular article is eminently practical in its focus: money. Or, rather, the diminishing returns associated with paying for AI improvement. The tl;dr? The computational and energy costs required to train deep learning systems can be higher than the benefits derived from them. Much higher. Here’s the money quote: “To cut the error rate in half, you can expect to need more than 500 times the computational resources.” And the longer version: “The good news is that deep learning provides enormous flexibility. The bad news is that this flexibility has a huge computational cost.”
That sounds bad. It is bad.
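That “500 times” figure makes sense if you assume error falls off as a power law in compute, which is how such scaling is often modeled. A minimal sketch of the arithmetic, with a hypothetical exponent chosen only to reproduce the article’s ballpark figure (not a measured constant):

```python
# Hedged illustration: assume an empirical power law, error ≈ k * compute**(-alpha).
# Then shrinking error by a factor R requires R**(1/alpha) times the compute.
def compute_multiplier(error_ratio: float, alpha: float) -> float:
    """Return the compute factor needed to shrink error by `error_ratio`."""
    return error_ratio ** (1.0 / alpha)

alpha = 0.11  # hypothetical exponent, picked to match the article's "500x" claim
factor = compute_multiplier(2.0, alpha)  # cut the error rate in half
print(f"~{factor:.0f}x more compute to halve the error")
```

With an exponent around 0.11, halving the error demands roughly 500–550 times the compute, which is why incremental accuracy gains get so expensive so quickly.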
Of the other most popular AI-related articles on IEEE Spectrum for the year, three were positive (about, for example, how Instacart uses AI to power its business), one was neutral (a series of charts offering a view of the current state of AI), and five were negative:
On the uncertain future of AI (“Today, even as AI is revolutionizing industries and threatening to change the global job market, many experts wonder if today’s AI is reaching its limits”).
Renowned machine learning pioneer Andrew Ng on the difference between test and production (“Those of us in machine learning are really good at getting it right in a test set, but unfortunately deploying a system requires more than getting it right in a test set”).
An article on the exciting potential and “deeply disturbing” reality of GPT-3, detailing “the potential danger companies face when working with this new and largely untamed technology, and when deploying commercial products and services powered by GPT-3.”
An interview with Jeff Hawkins, inventor of the Palm Pilot, on why “AI needs a lot more neuroscience” to be useful.
A listicle of sorts, capturing seven ways AI fails (“Neural networks can be disastrously fragile, forgetful, and shockingly bad at math”).
If anything, these curmudgeonly views on the realities of AI should make us feel hopeful, not downhearted. If you read the articles, there is a strong belief in the promise of AI, tempered by an understanding of the limitations that must be overcome. This is precisely what we should want, rather than an overly optimistic stance that overlooks these obstacles. The fact that these articles were more popular with those most likely to implement AI within the company is a sign of a rational approach to AI, rather than irrational exuberance.
Disclosure: I work for MongoDB, but the opinions expressed in this document are my own.