Last spring, artificial intelligence research institute OpenAI said it had made software so good at generating text, including fake news articles, that it was too dangerous to release. That line in the sand was soon erased when two recent master’s grads recreated the software and OpenAI released the original, saying awareness of the risks had grown and it hadn’t seen evidence of misuse.
Now the lab is back with a more powerful text generator and a new pitch: Pay us to put it to work in your business. Thursday, OpenAI launched a cloud service that a handful of companies are already using to improve search or provide feedback on answers to math problems. It’s a test of a new way of programming AI and the lab’s unusual business model.
OpenAI was founded as a nonprofit in 2015 by Elon Musk and other Silicon Valley notables to ensure that future superhuman AI was a benign force. The Tesla CEO parted ways with the lab in 2018, and last year it became a for-profit company and took a $1 billion investment from Microsoft. OpenAI’s leaders claim that only by commercializing its research for the benefit of investors can it raise the billions needed to keep pace on the frontiers of AI.
Thursday’s launch of OpenAI’s first commercial product completes the metamorphosis. A research institute created to compete with tech giants on superhuman AI is now challenging them in the more mundane arena of selling cloud services to businesses.
OpenAI’s service is built on a machine-learning technique that has made computers much better with language over the past two years. Machine-learning algorithms are directed to analyze vast collections of text scraped from the web to discover the statistical patterns in language use. The software can then be tuned to perform tasks like answering factual questions or summarizing documents.
Google has tapped the technology to improve how its search engine handles long queries, and Microsoft Office uses it to spot grammar glitches. OpenAI has focused on pushing the technique to greater scale and making software that generates text. Given a snatch of writing, it builds on it, unspooling sentences with similar statistical properties. The results can be uncannily smooth, if sometimes unmoored from reality.
Text generators like that can be fun to play with but haven’t previously seen much commercial use. OpenAI CEO Sam Altman says the latest generation is powerful and flexible enough for real work. “This is the first time we’ve got something we think is good enough to make into a product,” he says.
OpenAI’s new text generators are trained on a collection of almost a trillion words gathered from the web and digitized books. The training ran on a supercomputer with hundreds of thousands of processors that the lab paid Microsoft to build, effectively returning some of Microsoft’s $1 billion investment to its source.
The service is more open-ended than most AI cloud services, which usually perform one task, such as translation or image tagging, and are controlled with specific commands. Programmers who want to tap OpenAI’s technology simply submit human-readable text and get newly generated text back.
That may sound limiting, but by crafting the right input it’s possible to steer the software to perform different tasks. The goal is to massage it into riffing on the statistical language patterns from a particular part of the internet.
Submitting examples of paragraphs rewritten for elementary schoolers, followed by an unsimplified passage, prompts the service to rewrite that passage to be easier to read. Supplied with example Q&A pairs or turns of dialog, the service can answer factual questions or function as a chatbot, apparently drawing on the factual statements or conversations it absorbed in training.
“The big mental shift is, it’s much more like talking to a human than formatting things for a machine,” says Greg Brockman, OpenAI’s chief technology officer. “You give it a few questions and answers and suddenly it’s in Q&A mode.”
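The mechanics behind Brockman’s description are simple: a programmer packs a few worked examples into the input text, and the model continues in the same pattern. A minimal sketch of how such a few-shot prompt might be assembled (the `Q:`/`A:` format and function name here are illustrative, not OpenAI’s documented interface):

```python
# Sketch of few-shot prompting: concatenate example question/answer pairs
# into one block of text, then append the new question. The model is asked
# to continue the text, and the examples nudge it into "Q&A mode."
# The prompt format below is a common illustrative convention, not an
# official OpenAI specification.

def build_few_shot_prompt(examples, question):
    """Build a prompt from (question, answer) example pairs plus a new question."""
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")  # the model's completion starts here
    return "\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Italy?")
print(prompt)
```

The resulting string would then be submitted to the service as ordinary text; no special commands or formats are involved, which is the shift Brockman is pointing at.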
Nick Frosst, a researcher working on language machine learning who previously worked at Google, says that novel way of working with AI could widen the pool of people experimenting with language technology. “It’s exciting that you can do that,” he says. “It’s how most people think AI should work.”
OpenAI is offering its service for free for two months and already has some users. Algolia, a startup that builds internal search engines for apps and websites, uses it to improve its understanding of complex search strings.
Others are using an additional service in which OpenAI “fine-tunes” a version of the software to a specific task with additional data. Math education site Art of Problem Solving uses that to suggest comments to send students on their submissions, speeding up the work of graders.
Despite that early interest, OpenAI’s leaders freely admit that it’s far from clear how widely useful this new model of AI programming can be.
One unknown is its reliability. “These models are somewhat unpredictable,” says Robert Dale, of consultants Language Technology Group. OpenAI’s software can recreate the patterns of text but doesn’t have a commonsense understanding of the world. Its versatility can be a liability as well as an asset. Occasional clangers are of little consequence for some uses, such as predictive text, but could be deal breakers in others, such as a customer support chatbot.
One certainty about OpenAI’s technology is that it can talk dirty and nasty. Its training on vast swaths of the internet makes the software well versed in unsavory language, such as casual or aggressive racism, and it can be prompted to recreate it. The results can be reminiscent of how Twitter users prodded a notorious Microsoft bot called Tay to make racist comments.
When WIRED provided the service with two sentences from message board 4chan accusing Republicans of being “spineless” and not taking action on “Clinton, Pedos, Censorship or Riots,” OpenAI’s service escalated, riffing that “we are being beaten and raped … vast immigration started in the ’60s and never stopped.”
OpenAI says it will vet customers to prevent people from using the service for things like spam or harassment. Some customers have built filters to block the technology from producing toxic language, and OpenAI is working on safety features of its own.
Altman doesn’t expect OpenAI’s product to be lucrative right away but says it could develop into a significant revenue source in a few years as the lab makes improvements. Microsoft’s stake in the lab could help. OpenAI built its new service on Microsoft’s Azure cloud platform; it could see much wider use if Microsoft offered it as an AI service.
Altman allowed that closer ties with Microsoft are a possibility but declined to elaborate. When WIRED prompted the lab’s new software to fill out the details on “OpenAI and Microsoft’s first joint commercial venture,” it described a “game called Copilot that allows two people to play a racing game with one person controlling the gas pedal and the other the brakes.”