The rapid application development market is expected to grow from $7.8 billion in 2018 to $46.2 billion by 2023, at a CAGR of 42.9%. Given this market shift, close collaboration between professional data scientists and application developers is no longer a prerequisite. Today, citizen developers can operate alone, using predefined machine learning (ML) models built with low code or no code to deliver polished, AI-enhanced solutions.
Does this mean predefined models ignore the important role of human input? Keep reading: this article explains how humans and machines can work together to make machine learning more effective through augmented ML and, most importantly, covers a few key technical-debt concerns you need to address before jumpstarting your own AutoML (AI for AI).
A Quick Recap Of The AI Market
Evolving ROI-Driven AI Use Cases (Prediction + Automation + Optimization)
Here are some snapshot use cases of how AI is being profitably deployed across many vertical markets:
But where do you start when experimenting and implementing AI-driven projects? By following these three steps in your own environment, you can minimize typical augmented AI project implementation time and obtain better results.
1. In-Database Analytics Data Strategy
Build ML on top of existing data. Take the AI to your data: keep your significant data troves intact, use them as is, and avoid compromising the legacy processes that already do a great job of collecting and storing data. Most companies have established cost-effective, on-premises ways to exploit existing data, using products such as Isilon OneFS storage to create data lakes and maintaining hot (real-time, in-memory), warm (near-real-time, NoSQL) and cold (batch, Hadoop) datasets for both real-time and batch access, following a lambda architecture. Implement data virtualization with automated data tiering that takes the analytics to the data, not the other way around, significantly reducing latency and performance issues.
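The hot/warm/cold tiering described above can be sketched as a simple routing policy. This is a minimal illustration only; the time thresholds and tier names are assumptions for the example, and real tiering policies depend on workload and storage products.

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds (assumptions, not prescriptions):
HOT_WINDOW = timedelta(hours=1)   # real-time, in-memory access
WARM_WINDOW = timedelta(days=1)   # near-real-time, NoSQL access

def assign_tier(last_accessed: datetime, now: datetime) -> str:
    """Route a record to the hot, warm or cold tier by recency of access."""
    age = now - last_accessed
    if age <= HOT_WINDOW:
        return "hot"    # e.g., in-memory store for real-time queries
    if age <= WARM_WINDOW:
        return "warm"   # e.g., NoSQL store for near-real-time access
    return "cold"       # e.g., Hadoop/batch data lake

now = datetime.now(timezone.utc)
print(assign_tier(now - timedelta(minutes=5), now))  # hot
print(assign_tier(now - timedelta(hours=6), now))    # warm
print(assign_tier(now - timedelta(days=30), now))    # cold
```

An automated tiering layer would run logic like this continuously, moving datasets between stores so that analytics always lands on the fastest appropriate tier.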
2. Tech Stack Layers DNA
To bring some clarity to the complex AI ecosystem, divide the technology stack into layers (services, hardware and so on), since the industry has no standard definition:
With the above stack, you can jumpstart a hybrid cloud “bring your own” project with prebuilt blueprints.
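As the article notes, there is no standard industry definition of these layers, so the following is one possible (assumed) layering, written out as a simple data structure to make the idea concrete; the layer names and components are illustrative, not a reference architecture.

```python
# One possible layering of an AI tech stack (illustrative assumptions only).
tech_stack = {
    "hardware": ["CPU/GPU servers", "Isilon OneFS storage"],
    "data": ["data lake", "data virtualization", "hot/warm/cold tiering"],
    "platform": ["Hadoop", "NoSQL stores", "in-memory engines"],
    "ml": ["training frameworks", "AutoML tooling", "model registry"],
    "services": ["prediction APIs", "AI-enhanced applications"],
}

for layer, components in tech_stack.items():
    print(f"{layer:>9}: {', '.join(components)}")
```

Laying out the stack this way makes it easier to map a hybrid-cloud "bring your own" blueprint onto the pieces you already run on-premises.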
3. The Need For Augmented ML
After you establish the above key foundation, you can move on to AutoML, where the machine learning pipeline components leverage data characteristics and metadata learning, with built-in optimization and reinforcement learning methods.
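At its core, the AutoML idea is a search loop: automatically try candidate models, score each on held-out data and keep the best. Below is a deliberately tiny, self-contained sketch of that loop; the two candidate models and the synthetic dataset are assumptions for illustration, not a real AutoML system.

```python
import random

# Synthetic dataset: y ~ 2x with small noise (an assumption for the demo).
random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(100)]]
train, valid = data[:80], data[80:]

def fit_constant(train):
    """Baseline model: always predict the mean of the training targets."""
    mean_y = sum(y for _, y in train) / len(train)
    return lambda x: mean_y

def fit_linear(train):
    """Ordinary least squares for y = a*x + b."""
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, dataset):
    return sum((model(x) - y) ** 2 for x, y in dataset) / len(dataset)

# The "AutoML" loop: fit every candidate, score on held-out data, keep the best.
candidates = {"constant": fit_constant, "linear": fit_linear}
scores = {name: mse(fit(train), valid) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the linear model wins on this linear dataset
```

Production AutoML systems extend this same loop with metadata about the dataset, smarter search strategies (Bayesian optimization, reinforcement learning) and full pipeline components (feature engineering, model selection, hyperparameter tuning), but the select-by-validation-score skeleton is the same.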
Augmented ML will play a significant role in future applications, but we have a long way to go for full AutoML automation capabilities due to the following:
Takeaway
Doing a full ML proof of concept is easy; the challenge is integrating your current on-premises analytics with evolving cloud-native analytics as a service via cloud bursting (i.e., horizontally scaling on-premises data lake workloads into the cloud). Many cloud vendors talk about cloud bursting without significantly moving on-premises workloads and with very few proven scenarios, so push vendors to demonstrate pilots for your specific on-prem flavor (e.g., Cloudera cloud bursting with Google Cloud).
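The cloud-bursting idea reduces to a simple scheduling rule: run work on-premises until local capacity is exhausted, then overflow to a cloud pool. The toy scheduler below illustrates only that rule; the capacity number and job names are assumptions, not any vendor's implementation.

```python
# Toy cloud-bursting scheduler (illustrative assumptions only).
ON_PREM_CAPACITY = 3  # assumed number of concurrent on-prem job slots

def schedule(jobs):
    """Place jobs on-premises first; burst the overflow to the cloud."""
    on_prem, cloud = [], []
    for job in jobs:
        if len(on_prem) < ON_PREM_CAPACITY:
            on_prem.append(job)   # keep the workload on-premises
        else:
            cloud.append(job)     # burst the overflow to the cloud
    return on_prem, cloud

on_prem, cloud = schedule([f"job-{i}" for i in range(5)])
print(on_prem)  # ['job-0', 'job-1', 'job-2']
print(cloud)    # ['job-3', 'job-4']
```

The hard part in practice is everything this sketch omits: data locality, moving or virtualizing the underlying datasets, and consistent security and governance across both environments, which is exactly why vendor pilots matter.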
Let’s challenge the status quo of most organizations, including Fortune 500s, that continue to use their data the same way they did a decade ago. Let’s create better customer experiences with augmented intelligence. Establish end-to-end data governance by addressing “garbage in, garbage out,” and use commercial off-the-shelf (COTS) solutions that already exist in the market today.
This blog was originally published on Forbes.com.