“The promise of artificial intelligence and computer science generally vastly outweighs the impact it could have on some jobs in the same way that, while the invention of the airplane negatively affected the railroad industry, it opened a much wider door to human progress.” Paul Allen, Co-Founder of Microsoft
Artificial intelligence has existed for some time, both practically and as an idea, within computing and information technology. Only recently have we seen scale-out applications of the technology alongside the uptake of cloud computing in start-ups and large early-adopter organisations. Large financial institutions have been using machine learning, or a variant of it, for years now, whether to detect credit card fraud or to perform algorithmic modelling alongside actuarial science and other methods, or have at least been experimenting with it.
With the ease of access to cloud computing platforms, AI/ML is now available to all businesses, whatever the need or use case. Something I have seen across the industry is a direct requirement to run models on heavily transactional or streaming datasets before the data reaches a data lake or storage pool. This is now easy to do with tools like Apache Kafka and Hadoop and, more recently, Databricks. The traditional method was to run SQL on-premises and inject Python or R above the database layer to run in-time analytics before the data hit a data warehouse.
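The in-stream pattern above can be sketched in a few lines. This is a minimal illustration, not a real Kafka or Databricks integration: `consume()`, `score()` and the fraud threshold are all hypothetical stand-ins for a consumer client and a trained model.

```python
# Sketch: score each record while it is in flight, before it lands in storage.

def consume():
    """Stand-in for a Kafka consumer yielding transaction records."""
    yield {"id": 1, "amount": 42.0}
    yield {"id": 2, "amount": 9800.0}

def score(record):
    """Stand-in for a trained model; flags unusually large amounts."""
    return 1.0 if record["amount"] > 5000 else 0.0

def process_stream(sink):
    """Attach a model score to every record, then land it in the sink."""
    for record in consume():
        record["fraud_score"] = score(record)
        sink.append(record)

lake = []            # stand-in for the data lake / storage pool
process_stream(lake)
```

The point of the design is that the model runs between ingestion and storage, so the data arrives in the lake already enriched.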
Machine learning has come a long way, and it is easy to get AI and ML mixed up when looking at potential applications of the technology with cloud computing. One way of splitting the two out is to define the path taken to reach an output. With machine learning, it's quite typical to take an off-the-shelf model and adapt it, or to build a model for processing data with a defined output.
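The "adapt an off-the-shelf model" path can be shown with a deliberately tiny example. The pre-trained weight, the learning rate and the data are all made up for illustration; a real project would start from a published model rather than a single number.

```python
# Sketch: start from a "pre-trained" parameter and adapt it to local data.

PRETRAINED_WEIGHT = 0.5      # stand-in for a downloaded model parameter

def adapt(weight, samples):
    """Nudge the weight so predictions better fit local labelled samples."""
    for x, label in samples:
        prediction = weight * x
        weight += 0.1 * (label - prediction) * x   # one gradient-style step
    return weight

def predict(weight, x):
    return weight * x

local_data = [(1.0, 2.0), (2.0, 4.0)]   # local examples following y = 2x
weight = adapt(PRETRAINED_WEIGHT, local_data)
```

The defined output here is a numeric prediction; the "path" is data in, adapted model, prediction out, which is what distinguishes the ML workflow from simply calling a hosted AI API.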
Artificial intelligence in the cloud typically means calling an API or a processing engine from within an application to receive an output. Within Microsoft Azure, the core AI tools are known as Cognitive Services: a set of APIs which can receive data and give a data-based output. The key services being:
- Speech to text
There are more solutions within the Cognitive Services directory. Those are the key ones I have encountered.
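To make the "API call in, output back" idea concrete, here is a sketch of assembling a speech-to-text request. The URL shape and the `Ocp-Apim-Subscription-Key` header follow Azure's documented pattern for Cognitive Services, but the region, key and language used here are placeholders; check the current docs before relying on the exact endpoint.

```python
# Sketch: build (but don't send) a Cognitive Services speech-to-text request.

def build_speech_request(region, key, language="en-US"):
    """Assemble the URL and headers for a speech-to-text REST call."""
    url = (f"https://{region}.stt.speech.microsoft.com/speech/recognition/"
           f"conversation/cognitiveservices/v1?language={language}")
    headers = {
        "Ocp-Apim-Subscription-Key": key,   # standard Cognitive Services auth header
        "Content-Type": "audio/wav",
    }
    return url, headers

url, headers = build_speech_request("uksouth", "YOUR-KEY")
# The audio payload would then be POSTed to `url` with these headers,
# e.g. requests.post(url, headers=headers, data=wav_bytes).
```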
AI within cloud computing is incredibly powerful because it makes use of pre-existing ML models to give an output, which makes the services incredibly quick to answer a query or process data. A key example of this is the live text and audio translation in Skype and Microsoft Teams.
When to make use of AI in new applications can depend on the desired output and the user experience. For example, using facial detection to ensure a picture is acceptable for an access card or a profile picture, taking into account facial features or accessories, or comparing it against a set of official photos (this is something Uber is doing with Azure).
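The comparison step behind that kind of facial verification can be sketched as a similarity check between face embeddings. The embedding values and the 0.9 threshold below are invented for illustration; a real system would obtain embeddings from a face API such as Azure's Face service.

```python
# Sketch: accept a photo if it is close enough to any official photo's embedding.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_official_photos(candidate, official_embeddings, threshold=0.9):
    """Return True if the candidate embedding matches any official one."""
    return any(cosine_similarity(candidate, ref) >= threshold
               for ref in official_embeddings)

official = [[0.9, 0.1, 0.4], [0.85, 0.2, 0.45]]   # illustrative embeddings
new_photo = [0.88, 0.15, 0.42]
```

Comparing against a set of official photos, rather than one, makes the check more robust to glasses, lighting and other day-to-day variation.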
It is now possible to run some of these services within containers, allowing for AI on the edge where connections to services aren’t possible, or in air-gapped environments.
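As a rough sketch of what running one of these services in a container looks like, Azure's Cognitive Services container images follow a pattern along these lines; the image path, port and the `Eula`/`Billing`/`ApiKey` arguments shown here are from memory and should be verified against the current container documentation before use.

```shell
# Config-style sketch, not a tested command: run a speech-to-text container
# locally, billing back to an Azure resource. Placeholders in braces.
docker run --rm -it -p 5000:5000 \
  mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
  Eula=accept \
  Billing={AZURE_ENDPOINT_URI} \
  ApiKey={API_KEY}
```

Once running, the application calls the container's local endpoint instead of the cloud one, which is what makes air-gapped and edge scenarios workable.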
AI/ML is making a large impact in larger deployments and in moves into the Internet of Things, particularly IoT on the edge, where data streaming and capture happen directly from manufacturing devices into edge nodes for processing before being sent to the cloud for deep learning and analytics. One use case for edge devices is image processing and object classification. For example, images taken by cameras of products on an assembly line in a factory may be analysed for manufacturing defects without having to send the images to the cloud.
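The assembly-line pattern above can be sketched as: classify on the device, forward only compact results. The classifier, the `scratch_score` field and the record shapes are hypothetical stand-ins for a real edge model and telemetry format.

```python
# Sketch: run inference at the edge; only small verdict records go to the cloud.

def classify(image):
    """Stand-in for an on-device defect classifier."""
    return "defect" if image.get("scratch_score", 0) > 0.7 else "ok"

def process_on_edge(images, cloud_queue):
    """Inspect every image locally; upload only the verdicts."""
    for image in images:
        label = classify(image)
        # Only the result leaves the factory, not the raw image.
        cloud_queue.append({"id": image["id"], "label": label})

captured = [{"id": "a1", "scratch_score": 0.9},
            {"id": "a2", "scratch_score": 0.1}]
to_cloud = []
process_on_edge(captured, to_cloud)
```

Keeping the raw images on the edge node saves bandwidth and latency, while the cloud still receives enough data for aggregate analytics and model retraining.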
I’ll provide a wider article on this at some point.