Code faster with generative AI, but beware the risks when you do


Image: Yaroslav Kushta/Getty Images

Nowadays, developers can turn to generative artificial intelligence (GenAI) to code faster and more efficiently. Nevertheless, they should do so with caution and no less attention than before.

While the use of AI in software development may not be new — it’s been around since at least 2019 — GenAI brings significant improvements in the generation of natural language, images, and — more recently — videos and other assets, including code, Diego Lo Giudice, Forrester’s vice president and principal analyst, told ZDNET.

Also: Why the future must be BYO AI: Model lock-in deters users and stifles innovation

Previous iterations of AI were used mostly in code testing, with machine learning leveraged to optimize models for testing strategies, Lo Giudice told ZDNET. GenAI goes beyond these use cases, offering access to an expert peer programmer or specialist (such as a tester or a business analyst) who can be queried interactively to find information quickly. GenAI can also suggest solutions and test cases.

“For the first time, we are seeing significant productivity gains that traditional AI and other technologies have not provided us with,” he said. 

Developers can tap AI across the entire software development lifecycle, with a dedicated “TuringBot” at each stage to enhance tech stacks and platforms, Lo Giudice noted.

Forrester coined TuringBots to describe AI-powered tools that help developers build, test, and deploy code. The research firm believes TuringBots will drive a new generation of software development, assisting at every stage of the development lifecycle, including looking up technical documentation and auto-completing code.

“Analyze/plan TuringBots,” for instance, can facilitate the analysis and planning phase of software development, Lo Giudice said, pointing to OpenAI’s ChatGPT and Atlassian Intelligence as examples of such AI products. Others, such as Google Cloud’s Gemini Advanced, can generate designs for microservices and APIs along with their code implementation, while Microsoft Sketch2Code can generate working code from hand-drawn UI sketches, he said.

Also: Implementing AI into software engineering? Here’s everything you need to know

Lo Giudice added that “coder TuringBots” are currently the most popular use case for GenAI in software development, generating code from prompts as well as from code context and comments via autocompletion in popular integrated development environments (IDEs). These tools support common languages such as JavaScript, C++, Python, and Rust.
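To illustrate the workflow Lo Giudice describes, here is a hypothetical example of comment-driven autocompletion: the developer writes only the prompt comment, and the assistant fills in the function body (the prompt wording and the completed function are illustrative, not output from any specific tool).

```python
# Developer types only the comment below; the coding assistant suggests the body.
# Prompt: "Parse an ISO 8601 date string and return the day of the week."
from datetime import datetime

def day_of_week(iso_date: str) -> str:
    """Return the weekday name for an ISO 8601 date like '2024-05-17'."""
    return datetime.strptime(iso_date, "%Y-%m-%d").strftime("%A")

print(day_of_week("2024-05-17"))  # Friday
```

Even for a completion this small, the developer still owns the result: the suggested code must be read, tested, and accepted just like a colleague's patch.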

A big draw of generative models is that they can write code in many languages, allowing developers to input a prompt to generate, refactor, or debug lines of code, Michael Bachman, Boomi’s head of architecture and AI strategy, said. “Essentially all humans interacting with GenAI are quasi and senior developers,” he said. 

The software vendor integrates GenAI into some of its products, including Boomi AI, which translates natural language requests into action. Developers can use Boomi AI to design integration processes, APIs, and data models to connect applications, data, and processes, according to Boomi.

The company uses GenAI to support its own software developers, who keep a close watch on the code that runs its platform.

Also: Can AI be a team player in collaborative software development?

“And that is the key,” Bachman said. “If you are using GenAI as the primary source for building your whole application, you are probably going to be disappointed. Good developers use GenAI as a jumping-off point or to test failure scenarios thoroughly, before putting code into production. This is how we deal with that internally.”

His team also works to build features to meet their customers’ “practical AI objectives.” For example, Boomi is creating a retrieval system because many of its clients want to replace keyword searches with the ability to look up content, such as catalogs on their websites, in natural language.
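Boomi has not published the internals of that retrieval system, but the underlying idea can be sketched in a few lines: instead of requiring exact keyword matches, score each catalog entry by how much it overlaps with a free-form query. The catalog entries and the token-overlap scoring below are illustrative only; a production system would use embeddings and a vector index.

```python
# Toy sketch of natural-language catalog lookup: rank entries by shared tokens
# rather than exact keyword match. Illustrative only; production retrieval
# systems typically use embeddings and a vector store.

CATALOG = [
    "stainless steel water bottle 750ml",
    "wireless noise-cancelling headphones",
    "organic cotton t-shirt, navy",
]

def search(query: str, docs: list[str]) -> str:
    q = set(query.lower().split())
    # Return the document with the largest token overlap with the query.
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

print(search("metal bottle for water", CATALOG))
```

A query phrased nothing like the catalog entry ("metal bottle for water") still lands on the right product, which is the behavior keyword search cannot provide.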

Developers can also use GenAI to remediate security issues, Lo Giudice said, scanning AI-generated code for vulnerabilities and suggesting fixes that developers can apply.
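A concrete example of the kind of fix such a review surfaces: AI-generated database code often interpolates user input directly into a SQL string, which a security-aware assistant (or human reviewer) would flag and replace with a parameterized query. The snippet below is a generic illustration, not output from any particular tool.

```python
# Illustration of a common vulnerability in generated code and its remediation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is spliced straight into the SQL string.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Remediated: the driver binds the parameter, so input cannot alter the query.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload dumps every row from the unsafe version...
print(find_user_unsafe("' OR '1'='1"))  # [('admin',)]
# ...but matches nothing once the query is parameterized.
print(find_user_safe("' OR '1'='1"))    # []
```

The fix changes no behavior for legitimate input; it only closes the path by which attacker-controlled strings rewrite the query.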

Compared to traditional coding, a no- or low-code development strategy can offer speed, built-in quality, and adaptability, Forrester principal analyst John Bratincevic said. 

Also: Beyond programming: AI spawns a new generation of job roles

It also provides for an integrated software development lifecycle toolchain and access to an expanded talent pool that includes non-coders and “citizen developers” outside the IT community, Bratincevic said. 

Organizations may face challenges, however, related to the governance of large-scale implementation, especially with managing citizen developers who can number in the thousands, he cautioned. Pricing can also pose a barrier, as it is typically based on the number of end users, he said.

While GenAI or AI-infused software assistants can enable junior professionals to fill talent gaps, including in cybersecurity, Lo Giudice said expert review is still necessary for all these tasks.

Bratincevic concurred, stressing the need for developers and other employees in the software development lifecycle to review everything the platform generates or auto-configures using AI. 

“We are not yet, and probably won’t ever be, at the point of trusting AI blindly for software development,” he said.

For one, there are security requirements to consider, according to Scott Shaw, Thoughtworks’ Asia-Pacific CTO. The tech consultancy regularly tests new tools to improve its efficiency, whether in the IDE or to support how developers work. The company does so where it is appropriate for its customers and only with their consent, Shaw told ZDNET, noting that some businesses are still nervous about using GenAI.

Also: Hurtling toward generative AI adoption? Why skepticism is your best protection

“Our experience is that [GenAI-powered] software coding tools aren’t as security-aware and [attuned with] security coding practices,” he said. For instance, developers who work for organizations in a regulated or data-sensitive environment may have to adhere to additional security practices and controls as part of their software delivery processes.

Using a coding assistant can double productivity, but developers need to ask if they can adequately test the code and fulfill the quality requirements along the pipeline, he noted.
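The testing discipline Shaw points to can be made concrete: before assistant-generated code reaches production, pin its behavior with tests, especially the edge cases the original prompt never mentioned. The helper below is a hypothetical example of generated code and the kind of checks a careful developer adds around it.

```python
# A hypothetical assistant-generated helper, validated with edge-case tests
# before it is allowed into production.

def chunk(items: list, size: int) -> list:
    """Split a list into consecutive sublists of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# These cover the paths a quick prompt-and-paste workflow tends to skip.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []                 # empty input
assert chunk([1, 2], 10) == [[1, 2]]      # size larger than the list
try:
    chunk([1], 0)
except ValueError:
    pass  # invalid size is rejected rather than looping or returning garbage
```

Generated code frequently handles the happy path the prompt described and nothing else; the empty-input and invalid-size checks are exactly where review earns its keep.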

It’s a double-edged sword: Organizations must look at how GenAI can augment their coding practices so the products they develop are more secure, and — at the same time — how the AI brings added security risks with new attack vectors and vulnerabilities.

Because it delivers significant scale, GenAI amplifies everything an organization does, including the associated risks, Shaw noted. Far more code can be generated with it, which multiplies the potential risks accordingly.

Know your AI models

While low-code platforms may be a good foundation for GenAI TuringBots to aid software development, Bratincevic noted that organizations need to know which large language models (LLMs) are used and ensure they align with their corporate policies.

He said GenAI players “vary wildly” in this respect, and urged businesses to check the version and licensing agreement if they use public LLM services such as OpenAI’s ChatGPT.

Also: Yikes! Microsoft Copilot failed every single one of my coding tests

He added that GenAI-powered features for generating code or component configurations from natural language have yet to mature. They may see increased adoption among citizen developers but are unlikely to impress professional developers.

Bratincevic said: “At the moment, a proven and well-integrated low-code platform plus GenAI is a more sensible approach than an unproven or lightweight platform that talks a good game on AI.”

While LLMs carry out the heavy lifting of code writing, humans still need to know what is required and provide the relevant context, expertise, and debugging to ensure the output is accurate, Bachman said.

Developers also need to be mindful of sharing proprietary data and intellectual property (IP), particularly with open-source tools, he said. They should avoid using private IP such as code and financial figures to ensure they are not training their GenAI models using another organization’s IP, or vice versa. “And if you choose to use an open-source LLM, make sure it is well-tested before putting it into production,” he added. 
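One practical precaution implied by Bachman's warning is scrubbing obvious secrets and identifiers from a prompt before it leaves the organization. The patterns below are a minimal, illustrative sketch; real deployments rely on dedicated data-loss-prevention tooling with far broader coverage.

```python
# Minimal sketch of redacting sensitive strings from a prompt before sending it
# to an external GenAI service. Patterns are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "<EMAIL>"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<AWS_KEY>"),  # AWS access key ID shape
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def scrub(prompt: str) -> str:
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Debug this for bob@example.com, key AKIAABCDEFGHIJKLMNOP"))
```

Redaction at the prompt boundary is cheap insurance: even if the provider later trains on submitted data, the placeholders carry no proprietary value.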

Also: GitHub releases an AI-powered tool aiming for a ‘radically new way of building software’

“I would err on the side of being extremely circumspect about the models that GenAI tools are trained on. If you want those models to be valuable, you have to set up proper pipelines. If you do not do that, GenAI could cause a lot more problems,” he cautioned.

It is still early days and the technology continues to evolve; how roles, including that of software developers, will change remains far from certain.

For example, AI-powered coding assistants may change how skills are valued. Shaw quipped: will developers be deemed better because they are more experienced or because they can remember all the coding sequences? 

For now, he believes the biggest potential is GenAI’s ability to summarize information, offering a good knowledge base for developers to better understand the business. They then can translate that knowledge into specific instructions, so systems can execute the tasks and build the products and features customers want. 




