Putting AI to Work: Industry Use Cases for AI
Artificial intelligence (AI) is the foundation of a significant number of trends in this year’s almanac. While AI is being evaluated and applied across a wider range of use cases and industry sectors, a number of challenges must still be addressed before companies can realize a full return on their AI investments.
Some of these challenges are technical, but businesses also face challenges in how best to implement AI, how to educate people about its best use cases, and how to foster higher confidence among non-technical stakeholders. If these challenges can be met in 2018, we will take significant steps toward exploiting the full potential of AI.
In 2017 we saw companies focus on the most obvious use cases for AI. Natural language understanding has come a long way from the faux pas-ridden days of IBM’s Watson learning the Urban Dictionary and Microsoft’s hacked Tay bot. As experience and confidence increase, we’ll see natural language voice user interfaces being implemented more widely for customers and within enterprises, where they will enable the completion of operational and support tasks.
Increased confidence will also expand the scope of the responsibilities we are willing to give chatbots and virtual agents. For example, chatbots will be able to negotiate limited-scope deals with customers, within predefined thresholds and without human supervision. Combining AI with sentiment analysis could mean that companies optimize margin and customer lifetime value by ensuring that short-term profit never taints long-term net promoter scores (NPS). Consumers will counter this tactic by employing negotiation bots themselves, ensuring that they get better deals. We’ll be employing the technology previously used for military war games to get $1 off a bottle of soda.
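The limited-scope negotiation described above can be sketched as a simple rule-based agent. Everything here is hypothetical: the floor ratio, the counter-offer rule, and the function names are illustrative stand-ins rather than any real product's logic, and Python is used purely for illustration.

```python
# Illustrative sketch of a limited-scope negotiation bot.
# All thresholds and business rules here are hypothetical.

def negotiate(list_price, customer_offer, floor_ratio=0.9):
    """Accept or counter an offer within a predefined price band.

    The floor is the lowest price the bot may ever accept; a human
    would be escalated to only for deals outside this scope.
    """
    floor = list_price * floor_ratio
    if customer_offer >= floor:
        return ("accept", round(customer_offer, 2))
    # Counter midway between list price and the offer, never below the floor.
    counter = max(floor, (list_price + customer_offer) / 2)
    return ("counter", round(counter, 2))

print(negotiate(100.0, 95.0))  # → ('accept', 95.0)
print(negotiate(100.0, 70.0))  # → ('counter', 90.0)
```

A sentiment score could additionally gate the floor (for example, loosening it slightly for an at-risk customer), which is the margin-versus-NPS trade-off described above.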
This year we will also see industries that have traditionally been avid users of quantitative structured data analysis making wider forays into using sophisticated unstructured data analyses and natural language processing (NLP). We’ll see these companies enter phases where they’ll run traditional and AI models against processes in operations, inventory management, and procurement functions.
Regulators and industry watchdogs will also start to consider neural networks and other difficult-to-explain machine learning models for decision making, provided that decisions can be explained, replicated, and monitored.
Pushing the Envelope: AI Research and Development
Just as business users are getting comfortable with AI, new tech terms will start appearing on companies’ analytics roadmaps, which will have them heading straight to Wikipedia. Building upon deep reinforcement learning, which establishes a reward system for bots when they arrive at the “best” outcome for a complex task in a changing environment, researchers are now focusing on deep reinforcement fuzzing.
This essentially mutates the inputs that feed the reinforcement process to evaluate outputs. Feedback loops inform and refine the input process, helping the bot to learn and generate better outcomes over time.
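The mutate-evaluate-refine loop just described can be sketched with a simple hill-climbing stand-in for the learned policy: mutate an input, score the outcome with a reward function, and keep only mutations that improve it. A real deep reinforcement fuzzer would replace the random mutation with a neural policy trained on this reward; the `score` function below is a hypothetical stand-in.

```python
import random

def fuzz(score, seed, rounds=5000, rng=None):
    """Mutate an input, keeping a mutation only when the reward improves."""
    rng = rng or random.Random(0)
    best = bytearray(seed)
    best_reward = score(bytes(best))
    for _ in range(rounds):
        candidate = bytearray(best)
        pos = rng.randrange(len(candidate))
        candidate[pos] = rng.randrange(256)   # random byte mutation
        reward = score(bytes(candidate))
        if reward > best_reward:              # feedback loop refines the inputs
            best, best_reward = candidate, reward
    return bytes(best), best_reward

# Hypothetical reward: how many bytes of a target pattern the input hits.
target = b"FUZZ"
found, hits = fuzz(lambda b: sum(x == y for x, y in zip(b, target)),
                   b"\x00" * 4)
print(hits)  # climbs toward 4 as the loop keeps improving mutations
```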
Generative adversarial networks (GANs) will also appear on board papers. Here one neural net, the generator network, produces candidate outputs, which are evaluated by a second, discriminator network. Through iterative cycles of generation and discrimination, GANs have been used to create photorealistic images as well as 3D models from 2D photos.
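As a minimal sketch of the generate-and-discriminate cycle, the toy below pits a two-parameter linear generator against a logistic discriminator over one-dimensional data. Real GANs use deep networks on both sides; all numbers and learning rates here are illustrative.

```python
import math, random

rng = random.Random(42)
sigmoid = lambda t: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, t))))

# Real data: samples from N(4, 1).  Generator: g(z) = a*z + b, z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c).  Both are deliberately tiny so
# the adversarial loop is visible.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 32

for step in range(3000):
    reals = [rng.gauss(4.0, 1.0) for _ in range(batch)]
    zs = [rng.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [a * z + b for z in zs]

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    gw = gc = 0.0
    for x in reals:
        d = sigmoid(w * x + c)
        gw += -(1 - d) * x; gc += -(1 - d)
    for y in fakes:
        d = sigmoid(w * y + c)
        gw += d * y; gc += d
    w -= lr * gw / batch; c -= lr * gc / batch

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    ga = gb = 0.0
    for z, y in zip(zs, fakes):
        d = sigmoid(w * y + c)
        dy = -(1 - d) * w        # dLoss/dy for loss = -log D(y)
        ga += dy * z; gb += dy
    a -= lr * ga / batch; b -= lr * gb / batch

samples = [a * rng.gauss(0.0, 1.0) + b for _ in range(1000)]
print(round(sum(samples) / len(samples), 2))  # mean drifts toward 4
```

Note that a linear discriminator can only really constrain the mean of the generated distribution, so the generator's spread may collapse; even this toy illustrates why GAN training is notoriously delicate.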
As AI becomes more widely adopted, organizations will look to link different frameworks and technologies in order to remove human bottlenecks from processes. This will be no mean feat and will increase cost and effort. Although we talk about AI as if it were a single thing, we’re really talking about a host of different modelling approaches, frameworks, and trained models. The good news for businesses is that we will also see a significant increase in the availability of repositories of trained models to accomplish specific tasks. This will fast-track AI development projects and move us one step further from having to spend nights and weekends laboriously training bots.
One of the significant drivers for AI adoption in 2018 will be the IoT and edge computing. These related drivers will get companies thinking about distributed AI, which is able to evaluate enormous data sets using a large-scale distributed network of simple hardware. Think of it as many minds making light of a heavy problem.
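The "many minds" idea boils down to a map/reduce pattern: shard a large data set across many simple workers, evaluate each shard locally, then combine the partial results. In the sketch below, threads stand in for networked edge nodes, and the per-shard scoring function is a hypothetical example.

```python
from concurrent.futures import ThreadPoolExecutor

def score_shard(shard):
    """Hypothetical per-shard evaluation: count readings over a threshold."""
    return sum(1 for reading in shard if reading > 0.8)

def distributed_score(data, n_workers=4):
    shards = [data[i::n_workers] for i in range(n_workers)]  # round-robin split
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(score_shard, shards)             # map step
    return sum(partials)                                     # reduce step

readings = [i / 1000 for i in range(1000)]    # stand-in sensor stream
print(distributed_score(readings))            # same answer as a single node
```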
Achtung! AI Problems
Before we can all go home and let AI tackle the difficult problems in the world, we still need to address a number of issues. Implementing AI successfully doesn’t fit traditional methods such as the SDLC, Agile, or DevOps; AI needs hyper-parameter tuning. Developing deterministic software applications and developing probabilistic machine learning models are fundamentally different activities. This mindset and skill-set shift will challenge executives, managers, developers, data scientists, testers, users, and analysts. It’s a whole new world.
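To make the shift concrete, hyper-parameter tuning in its simplest form is a search over model settings scored by validation performance. In this sketch, `evaluate` is a hypothetical stand-in for a full train-and-validate run, with a made-up sweet spot; the point is the search loop, not the model.

```python
import itertools

def evaluate(lr, depth):
    """Stand-in for training and validating a model; in this toy,
    performance peaks at lr=0.1 and depth=4 (a made-up optimum)."""
    return -abs(lr - 0.1) - 0.05 * abs(depth - 4)

# Grid search: try every combination and keep the best-scoring one.
grid = itertools.product([0.01, 0.1, 1.0], [2, 4, 8])
best = max(grid, key=lambda params: evaluate(*params))
print(best)  # → (0.1, 4)
```

In a deterministic application this loop would be pointless; in model development it is the core of the work, which is exactly the mindset shift described above.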
Machine learning and AI models depend on historical data as well as real-time data. The challenge here is that many organizations either don’t have access to historical data or struggle to clean it or transform it into a structure that an algorithm can consume. This causes their AI projects to fall at the first hurdle.
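The clean-and-transform hurdle can be illustrated in miniature: messy historical records full of blanks and junk values must be normalized into consistent numeric features before any algorithm can consume them. The field names, sample records, and cleaning rules below are all hypothetical.

```python
import csv, io

# Hypothetical messy historical export: stray spaces, blanks, "n/a" junk.
RAW = """customer,age,spend
alice, 34 ,1200.50
bob,,880
carol,41,n/a
"""

def to_features(raw_csv):
    """Turn raw records into numeric feature rows, dropping unusable ones."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        try:
            age = int(rec["age"].strip())       # reject blanks and junk
            spend = float(rec["spend"].strip())
        except ValueError:
            continue                            # drop unusable history
        rows.append([age, spend])
    return rows

print(to_features(RAW))  # → [[34, 1200.5]]
```

Dropping records is the crudest policy; real pipelines often impute missing values instead, but either way this work has to happen before the model ever runs.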
Another key challenge for AI is its reputation as an arcane dark art, impenetrable to all but a cabal of wise data scientists (and certainly not understandable by budget holders). The first task for any AI professional in 2018 should be tackling jargon. Move away from a world of neural nets and random forests into a world of open collaboration with those funding projects and those who will make critical business decisions based on AI outputs. Greater transparency will lead to more confident investment and ultimately greater returns on investment.
As a final thought, we should always be realistic about the extent to which we rely on algorithms. It is easy to become overconfident in our AI models simply because they have been trained against billions of data points. In an algorithm-reliant system, a single data error, an unexpected algorithm nuance, or an issue with preprocessing will be multiplied many-fold. These problems could produce seemingly logical but inherently incorrect outputs that would be incredibly challenging to detect and debug. AI is helping us to live in a smart world, but we need to ensure that it doesn’t become too smart for our own good.