AI models that generate entirely new content create a world of opportunity for entrepreneurs. And engineers are learning to do more with less.
These are some takeaways from a panel discussion at Madrona Venture Group’s Smart Apps Summit in Seattle this week.
“Big data is no longer a priority, in my opinion,” said Stanford computer science professor Carlos Guestrin. “You can solve complex problems with little data.”
Instead of improving AI models by collecting more data, researchers are more focused on modifying the models' underlying architectures, said Guestrin, co-founder of Seattle-based machine learning startup Turi, which was acquired by Apple in 2016.
And AI architectures evolved quickly, resulting in models like DALL-E and GPT-3 that can generate images or text from initial prompts.
These new “foundation” AI models are the basis for emerging startups that generate written content, interpret conversations, or evaluate visual data. They will enable a host of use cases, said Oren Etzioni, technical director of the Allen Institute for Artificial Intelligence (AI2). But they must also be tamed so that they become less biased and more reliable.
“A huge challenge with these models is that they’re hallucinating. They’re lying, they’re generating — they’re making things up,” said Etzioni, also a venture capital partner at Madrona.
Guestrin and Etzioni spoke during a fireside chat moderated by UW computer science professor Luis Ceze, who is also a Madrona partner and CEO of Seattle AI startup OctoML.
OctoML was included on a new top 40 list of smart app startups assembled by Madrona in collaboration with other companies. The startups on the list have raised more than $16 billion since their inception, including $5 billion since the start of this year.
Read on for more highlights from the discussion.
New AI models are changing the way engineers work
Engineers are used to creating separate AI models with unique technology stacks for individual tasks, such as predicting airfares or medical outcomes — and they’re used to feeding the models massive training datasets. But now engineers start from foundation models and build specific tools using far less data as input, Guestrin said.
“We’re totally changing, with large language models and foundation models, the way we think about app development, going beyond this idea of big data,” Guestrin said. He added that engineers use “small, task-specific, customized datasets to refine the prompts that lead to a vertical solution that you really care about.”
Etzioni added: “Now with the base models, I build a unique model and then I can tweak it. But a lot of the work is done in advance and done once.”
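The pattern Guestrin describes — a handful of task-specific examples steering a general-purpose base model — can be sketched as few-shot prompt construction. This is a hypothetical illustration, not code from the panel; the function names and example data are invented for the sketch.

```python
# Hedged sketch: instead of training a bespoke model on big data, a small,
# task-specific dataset is folded directly into the prompt sent to a base
# model. `build_few_shot_prompt` is a hypothetical helper, not a real API.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt from a handful of labeled examples."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    # The base model is asked to complete the final, unlabeled case.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("The flight was delayed four hours", "negative"),
    ("Great legroom and friendly crew", "positive"),
]
prompt = build_few_shot_prompt(
    "Classify airline feedback as positive or negative.",
    examples,
    "Lost my luggage again",
)
print(prompt)
```

A real system would send this prompt to a hosted model such as GPT-3; the point is that the "training data" here is two examples, not millions of rows.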
AI has become “democratized”
AI tools are becoming more accessible to engineers with less specialized skills, and the cost of building new tools is starting to come down. The general public also has more access through tools like DALL-E, Guestrin said.
“I’m impressed with how the big language models, the base models, have enabled others beyond developers to do amazing things with AI,” Guestrin said. “Big language models give us the opportunity to create new programming experiences, to bring AI applications to a wide range of people who never thought they could program an AI.”
Bias is always a problem
Bias has long plagued AI models. And it remains a problem in the new generative AI models.
As an example, Guestrin pointed to a story creation tool that created a different fairy tale outcome depending on the race of the prince. If the tool was asked to create a fairy tale about a white prince, it described him as handsome, and the princess fell in love with him. If asked to create a story with a black prince, the princess was shocked.
“I worry a lot about that,” Guestrin said of biases in AI models and their ability to in turn affect societal biases.
Etzioni said new technologies being developed will be more effective at removing bias.
Guestrin said engineers need to consider the issue at all stages of development. The most important focus for engineers should be how they evaluate their models and organize their datasets, he said.
“To think that bridging the gap between our AI and our values is just a bit of salt that we can sprinkle on the end, like post-processing, is a bit of a limited perspective,” Guestrin added.
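One concrete form of the evaluation Guestrin advocates is a counterfactual check: probe a generator with prompts that differ only in a sensitive attribute, as in the prince example above, and flag divergent outputs. This is a hypothetical sketch; `generate` is a toy stand-in, not a real model.

```python
# Hedged sketch of counterfactual bias evaluation. A real test would call
# an actual text generator; this toy `generate` just echoes a canned story.

def generate(prompt):
    # Hypothetical stand-in for a story-generation model.
    return f"Once upon a time, {prompt} lived in a castle."

def counterfactual_outputs(template, attribute_values):
    """Generate one output per substitution of the sensitive attribute."""
    return {v: generate(template.format(attr=v)) for v in attribute_values}

def divergent(outputs):
    """True if outputs differ beyond the substituted attribute itself."""
    normalized = {v: text.replace(v, "<attr>") for v, text in outputs.items()}
    return len(set(normalized.values())) > 1

outs = counterfactual_outputs("a {attr} prince", ["white", "Black"])
print(divergent(outs))  # the toy model treats both prompts identically
```

In the fairy-tale case Guestrin described, a check like this would return `True` — the story's framing changed with the prince's race — which is exactly the kind of failure a systematic evaluation is meant to surface before release.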
Human input will be central to improving models
Etzioni drew an analogy to Internet search engines, which in their early days often required users to search in different ways to get the answer they wanted. Google excelled at improving its results after learning, from billions of queries, what people clicked on.
“As people query these engines, query them again, and produce things, the engines will improve to do what we want,” Etzioni said. “I strongly believe that we are going to have humans in the loop. But this is not a barrier to technology.”
Nor can AI predict its own best use cases. “If you ask GPT-3 what is your best and highest use for building new startups, you’ll get garbage,” Etzioni said.
Improving reliability is a priority
“These models, while amazing, are fragile. They can fail catastrophically,” Ceze said.
Researchers should learn to better define their goals and consider how to systematically test and evaluate systems to make them safer, Guestrin said. He added that researchers should “bring more of that software engineering mindset.”
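The "software engineering mindset" Guestrin calls for can be pictured as behavioral tests run against a model the way unit tests run against code. The sketch below is hypothetical — the `model` function is a deliberately fragile placeholder classifier, invented to show the check catching a failure.

```python
# Hedged sketch: unit-test-style behavioral checks on a model. `model` is a
# hypothetical placeholder; a real suite would wrap calls to a deployed model.

def model(text):
    # Brittle toy sentiment classifier, standing in for a real one.
    return "negative" if "not" in text or "terrible" in text else "positive"

def invariance_check(model, pairs):
    """Each pair should get the same label (paraphrases, typos, etc.).

    Returns the pairs where the model's output changed — i.e., the failures.
    """
    return [(a, b) for a, b in pairs if model(a) != model(b)]

failures = invariance_check(model, [
    ("The service was terrible", "The service was terrrible"),  # typo pair fails
    ("I loved the food", "I really loved the food"),            # this pair passes
])
print(failures)
```

Running checks like this systematically, over many perturbations and slices of data, is one way to catch the catastrophic failures Ceze warns about before users do.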
Learning to make AI models more reliable is a major research focus of Guestrin’s group at Stanford and of AI2.
“It’s going to be a long time before you have a GPT-3-based application to run a nuclear power plant. It’s just not that kind of technology,” Etzioni said. “That’s why I think the web search engine analogy is so profound. If we have a human in the loop and if we have fast iteration, we can use very unreliable technology in a very empowering way.”