People keep finding novel uses for generative artificial intelligence, the latest being that it can learn to design the specialized hardware that makes it run faster. Generative AI applications such as large language models became mainstream when ChatGPT went viral in 2022, but they require copious, complicated hardware underneath their user-friendly skins, especially when asked to act on more than just interactive text.
“Specialized [hardware] accelerators are crucial for maximizing the potential of AI tools, but current design tools’ complexity and required hardware expertise hinder innovation,” explained Arnob Ghosh, assistant professor in New Jersey Institute of Technology’s Electrical and Computer Engineering department.
Ghosh, along with colleagues Shaahin Angizi and Abdallah Khreishah, had the meta-idea of enlisting a large language model as a design assistant. Their plan is to train it to learn the context required for hardware-accelerator design, based on a user's requirements for accuracy, energy usage, and speed.
“We are trying to provide the optimal context so that an LLM can generate the desired results. This is based on the idea that the LLM can indeed demonstrate in-context learning,” Ghosh said. “The challenging question is how can we do prompt optimization here. Some basic instructions might not work. The prompt must consist of some of the elements of the codes themselves so that we can provide the optimal context to the LLM.”
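The prompt-optimization idea Ghosh describes — embedding elements of the code itself alongside the user's constraints so the LLM has concrete design context — can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual system: the function name, the example HDL fragment, and the constraint fields are all hypothetical.

```python
# Hypothetical sketch of composing an in-context prompt for an
# LLM-based accelerator-design assistant. Names and structure are
# illustrative assumptions, not the authors' implementation.

# A code element supplied as in-context material (hypothetical fragment).
EXAMPLE_HDL = """\
// 8-bit multiply-accumulate building block
module mac_unit(input [7:0] a, b, output [15:0] out);
  assign out = a * b;
endmodule
"""

def build_prompt(accuracy: str, energy: str, speed: str,
                 code_examples: list[str]) -> str:
    """Pair the user's accuracy/energy/speed requirements with actual
    code elements, so the model receives concrete design context
    rather than bare instructions."""
    context = "\n".join(code_examples)
    return (
        "You are a hardware-accelerator design assistant.\n"
        f"Target accuracy: {accuracy}\n"
        f"Energy budget: {energy}\n"
        f"Latency target: {speed}\n"
        "Reference code elements:\n"
        f"{context}\n"
        "Generate an accelerator design meeting these constraints."
    )

prompt = build_prompt("8-bit fixed point", "low power", "real-time",
                      [EXAMPLE_HDL])
print(prompt)
```

The point of the sketch is only the structure: instead of a generic instruction, the prompt carries both the optimization targets and snippets of the hardware description itself, which is what gives the model the "optimal context" for in-context learning.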