Programs like Stable Diffusion and ChatGPT have propelled generative AI into the public consciousness. Despite the inevitable hype, part of the appeal of these AI systems, which use vast data sets to generate content, is their immediate accessibility. Anyone, not just data scientists, can use them. Domain experts across many fields have enthusiastically demonstrated their wide-ranging applications in sales, marketing, web design, legal, IT, HR and more.
Accelerating digital with generative AI
There’s not an industry in the world that generative AI will not impact. The field of digital engineering is a case in point. AI in software engineering is not new. What sets apart large language models like ChatGPT is the sheer level of autonomy they bring to tasks such as code generation, bug detection, natural language processing, documentation, and testing.
By speeding up and automating software development, implementation, and infrastructure, organisations can accelerate their digital evolution and meet the demand for updates, improvements, and all-new digital products. These time savings and productivity gains come at a time when businesses are struggling with digital skills shortages.
Playing catch-up across other business areas
Digital engineers may understandably look on with wonder and dread at ChatGPT and its coding-specific alternatives. Even though we are still at an early stage in this AI learning curve, the data sets and models are advancing rapidly. Considering the technical leaps made with each new generation of the GPT (generative pre-trained transformer) model used to train ChatGPT, it’s tempting to speculate – what next?
However, just as organisations race to experiment with AI’s capabilities, they also need to prepare for its many ramifications. These will impact processes, policies, and people alike.
Trust, transparency and adoption
ML algorithms that can create new content—images, video, text or code—pose significant challenges around ethics, privacy, and ownership. Regulators are working hard to catch up with AI advances. Organisations have a golden opportunity to seize the initiative, demonstrating their AI models are fair and transparent.
Creating and operating a framework of policies that assure trust could be critical to success. The analyst firm Gartner advocates an approach it calls AI TRiSM – AI trust, risk, and security management. By 2026, it anticipates that organisations implementing AI TRiSM will realise a 50 per cent improvement in AI adoption, attainment of business goals, and user acceptance.
Privacy and security are other major areas of concern. Not only is the issue of deepfakes one that legislators, governments and businesses have yet to fully address, but AI-driven security practices could potentially open up organisations to privacy breaches and raise complex questions around accountability.
Generative AI algorithms learn from, and increasingly shape, what happens in people’s lives by observing everyday habits, whether captured via cameras, wearables such as the Apple Watch, or microphones. However, the technology is not yet ready to face the legal and ethical challenges that may arise, and this area needs more research. Such issues include the propagation of misleading narratives through fake images and videos that harm individuals, infringe their rights, and deceive the public.
Finally, ownership and intellectual property are hotly contested challenges in the burgeoning world of AI. It’s essential to establish ownership and usage rights in any agreement involving generative AI. In general, if a business owns the generative AI model, its content would also belong to the business. However, if the data used to train the model was owned by someone else, or was copyrighted, the situation becomes more complex.
A multi-disciplinary approach to AI innovation
Generative AI models will transform every sector and every job role – digital engineering being a prime example. Organisations have responded with excitement and urgency as it becomes clear that these types of AI models have the potential to significantly reduce costs, speed up production, and confer competitive advantage.
Now is indeed an ideal time for organisations to experiment with generative AI applications. At a technical level, many businesses will find there is groundwork to be done before their digital infrastructure can truly take advantage of what advanced AI models offer. However, fully realising the technology’s potential requires organisations to address issues such as transparency and user acceptance.
Their strategy should therefore be three-pronged. First, mastering the technical aspects of AI model maintenance and integration. Second, developing a robust governance framework of tools, policies and protocols covering the ethical, legal, and trust aspects. And third, implementing an employee engagement strategy that tackles user adoption and acceptance.
Dr. Atif Farid Mohammad – head of AI R&D Centre of Excellence, Apexon