Digital demands have increased globally, and the growing need for more software has paved the way for code-free development to become a mainstay.
In 2021, Gartner predicted that low-code application platforms would remain the largest component of the low-code development technology market through 2022, increasing by nearly 30 per cent from 2020 to reach $5.8bn in 2021. It’s cause for celebration, quite frankly: low- and no-code environments are already helping to transform enterprises while expanding the scope of development into the remit of non-technical staff. Day to day, it means that more people can access and innovate with data.
However, while the immediate business benefits of low- and no-code development are clear, and we often see success stories involving analysts and other business experts, there is little guidance on how to systemise a low- and no-code platform and approach, or on the long-term opportunities for businesses that get it right.
Systemising a low- and no-code platform and approach
It’s safe to say that low- and no-code didn’t cross the chasm immediately. Until recently, many technology leaders felt forced to choose between buying off-the-shelf AI solutions and a pure build approach. The former, of course, is often perceived to carry the risk of losing competitive edge and IP assets; the latter, the risk of human error and poor execution.
It meant that many companies felt they needed to choose between empowering business analysts and empowering data scientists. Many also felt paralysed by the need to build compliance and processes around AI deployment.
However, over the past few years it has become clear that as organisations scale their use of advanced analytics and create more projects with AI, the development and deployment of AI must include more people from different parts of the organisation, including business analysts, novice data scientists, IT operations, and business users.
Capitalising on the everyday AI opportunities
What do first steps look like for businesses looking to capitalise on the potential of low- and no-code? Usually, businesses will have to start by getting their data out of silos and into a single, unified, analytics-ready environment, and by selecting the most appropriate internal business use cases.
From here, companies can work with low- and no-code approaches and deploy central, global, and company-wide solutions as they design and manage their AI initiatives. Practically speaking, this means they are systemising the use of data and AI and executing faster. Often, this involves using prebuilt components and automation wherever possible to streamline work and processes.
The most successful organisations will identify data science resource-heavy tasks and map these to where a low- and no-code approach can help. This may extend to smart data ingestion, cleaning complex text fields, processing dates and times, combining datasets, and even creating new machine learning models, all tasks that can be accomplished with low- and no-code solutions. Many business users have even ventured into code-free data pipelining, data preparation, and model training in order to scale out models in production using these new tools.
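To make that mapping concrete, below is a minimal, hypothetical sketch of the kind of hand-written data preparation that such tools aim to replace with visual, prebuilt components. The file names and column names are illustrative assumptions, not a reference to any specific platform.

    # A sketch of typical hand-coded data preparation: ingest, clean text,
    # parse dates, and combine datasets into one analytics-ready table.
    import pandas as pd

    # Ingest two hypothetical datasets from separate silos
    orders = pd.read_csv("orders.csv")        # e.g. order_id, customer_id, order_date, notes
    customers = pd.read_csv("customers.csv")  # e.g. customer_id, region

    # Clean a free-text field and parse dates and times
    orders["notes"] = orders["notes"].fillna("").str.strip().str.lower()
    orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
    orders["order_month"] = orders["order_date"].dt.to_period("M")

    # Combine the datasets into a single analytics-ready table
    analytics_ready = orders.merge(customers, on="customer_id", how="left")
    print(analytics_ready.head())

Every step in a script like this, from date parsing to the final join, is repeatable work that a visual flow can standardise and hand to non-coders.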
Other examples of what business users and novice data scientists can do with low- and no-code tools include applying model assertions to capture and test known use cases, and what-if analysis to test model sensitivity interactively.
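As an illustration, here is a small sketch of both ideas in plain Python with scikit-learn. The toy churn model, feature names, and thresholds are assumptions made for the example, not a description of any particular product.

    # A toy model assertion and a simple what-if sensitivity check
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy training data: [monthly_spend, tenure_months] -> churn (1) or not (0)
    X = np.array([[10, 1], [15, 2], [200, 36], [180, 48], [12, 3], [220, 60]])
    y = np.array([1, 1, 0, 0, 1, 0])
    model = LogisticRegression().fit(X, y)

    # Model assertion: a long-tenured, high-spend customer should not be flagged as churn
    assert model.predict([[210, 50]])[0] == 0, "Known low-risk case misclassified"

    # What-if analysis: vary tenure while holding spend fixed, watch the churn probability
    for tenure in (1, 6, 12, 24, 48):
        prob = model.predict_proba([[50, tenure]])[0][1]
        print(f"tenure={tenure:>2} months -> churn probability {prob:.2f}")

In a low- or no-code tool the same checks would be configured visually, but the underlying logic is no more than this: pin down cases the model must get right, then probe how predictions move as inputs change.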
An opportunity to scale machine learning and AI for data scientists
Empowering business and other non-data science roles with low- and no-code visual tools is only one part of the opportunity: your data scientists may also be able to leverage low- and no-code solutions to operationalise more models more quickly than with a code-only approach.
Model maintenance is an area where these tools can help data scientists significantly. With visual and collaborative interfaces for data pipelining, data preparation, model training, and MLOps, data scientists can easily scale out their models in production without failure or interruptions in a transparent and traceable way. With visibility into model performance metrics and a clear separation of design and production environments, data scientists are able to monitor key performance indicators easily and automatically.
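The monitoring piece of that workflow can be reduced to a very small pattern, sketched below under assumed numbers: compare a live performance metric against the baseline measured at design time and flag when it drifts too far. The baseline, tolerance, and example data are hypothetical.

    # A sketch of automated model health monitoring against a design-time baseline
    from sklearn.metrics import roc_auc_score

    DESIGN_AUC = 0.88   # AUC measured in the design environment (assumed)
    ALERT_DROP = 0.05   # alert if production AUC drops by more than this (assumed)

    def check_model_health(y_true, y_scores):
        """Compare the live (production) AUC with the design-time baseline."""
        production_auc = roc_auc_score(y_true, y_scores)
        if production_auc < DESIGN_AUC - ALERT_DROP:
            print(f"ALERT: production AUC {production_auc:.2f} is below baseline {DESIGN_AUC:.2f}")
        else:
            print(f"OK: production AUC {production_auc:.2f} is within tolerance")
        return production_auc

    # Example call with a small batch of hypothetical labelled production data
    check_model_health([0, 1, 1, 0, 1, 0], [0.2, 0.7, 0.9, 0.4, 0.6, 0.1])

Visual MLOps interfaces wrap this kind of check in dashboards and scheduled jobs, which is what lets teams keep design and production environments clearly separated while still watching the same indicators.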
Low- and no-code tools also reduce the duplicative work of data scientists, especially where code silos are common and where work may be redone simply because data scientists are unaware that it has already been done. Centralised low- and no-code environments can act as repositories and catalogues for code, giving data scientists visibility into work done across an organisation, saving time and freeing up resources to experiment and innovate.
Gregory Herbert, senior vice president and general manager, Dataiku