Ben Dickson is the founder of TechTalks. He writes regularly about business, technology and politics. Follow him on Twitter and Facebook
This article is part of our series that explores the business of artificial intelligence.
Last week, Hugging Face announced a new product in collaboration with Microsoft called Hugging Face Endpoints on Azure, which allows users to set up and run thousands of machine learning models on Microsoft’s cloud platform.
Having started as a chatbot application, Hugging Face made its fame as a hub for transformer models, a type of deep learning architecture that has been behind many recent advances in artificial intelligence, including large language models like OpenAI GPT-3 and DeepMind’s protein-folding model AlphaFold.
Large tech companies like Google, Facebook, and Microsoft have been using transformer models for several years. But the past couple of years have seen growing interest in transformers among smaller companies, including many that don’t have in-house machine learning talent.
This is a great opportunity for companies like Hugging Face, whose vision is to become the GitHub for machine learning. The company recently secured $100 million in a Series C round at a $2 billion valuation, and it wants to provide a broad range of machine learning services, including off-the-shelf transformer models.
However, building a business around transformers presents challenges that favor large tech companies and put firms like Hugging Face at a disadvantage. Hugging Face’s collaboration with Microsoft could mark the beginning of market consolidation, and possibly a future acquisition.
Why transformer models are costly
Transformer models can do many tasks, including text classification, summarization, and generation; question answering; translation; writing software source code; and speech-to-text conversion. More recently, transformers have also moved into other areas, such as drug research and computer vision.
One of the main advantages of transformer models is their capability to scale. Recent years have shown that the performance of transformers grows as they are made bigger and trained on larger datasets. However, training and running large transformers is very difficult and costly. A recent paper by Facebook shows some of the behind-the-scenes challenges of training very large language models. While not all transformers are as large as OpenAI’s GPT-3 and Facebook’s OPT-175B, they are nonetheless tricky to get right.
Hugging Face provides a large repertoire of pre-trained ML models to ease the burden of deploying transformers. Developers can directly load transformers from the Hugging Face library and run them on their own servers.
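As a concrete illustration, here is a minimal sketch of loading and running a pre-trained transformer locally with the `transformers` library; the checkpoint named here is just one example of the thousands available on the Hugging Face Hub.

```python
# Minimal sketch: running a pre-trained transformer on your own hardware
# with the Hugging Face `transformers` library. The checkpoint name is one
# example from the Hub; thousands of others can be substituted.
from transformers import pipeline

def classify(texts):
    """Download the checkpoint on first use, then run inference locally."""
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    return classifier(texts)

# Example call (downloads model weights from the Hub on first run):
# classify(["Transformers make many NLP tasks accessible."])
```

This simplicity covers experimentation; serving such a model in production at scale is where the infrastructure and cost considerations discussed below come in.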
Pre-trained models are great for experimentation and fine-tuning transformers for downstream applications. However, when it comes to applying the ML models to real products, developers must take many other parameters into consideration, including the costs of integration, infrastructure, scaling, and retraining. If not configured right, transformers can be expensive to run, which can have a significant impact on the product’s business model.
Therefore, while transformers are very useful, many organizations that stand to benefit from them don’t have the talent and resources to train or run them in a cost-efficient manner.
Hugging Face Endpoints on Azure
An alternative to running your own transformer is to use ML models hosted on cloud servers. In recent years, several companies have launched services that make it possible to use machine learning models through API calls, without needing to know how to train, configure, and deploy them.
Two years ago, Hugging Face launched its own ML service, called Inference API, which provides access to thousands of pre-trained models (mostly transformers) as opposed to the limited options of other services. Customers can rent Inference API based on shared resources or have Hugging Face set up and maintain the infrastructure for them. Hosted models make ML accessible to a wide range of organizations, just as cloud hosting services brought blogs and websites to organizations that couldn’t set up their own web servers.
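At the API level, a hosted-inference call is just an authenticated HTTP POST. The sketch below builds such a request against the Inference API’s public endpoint pattern; the model ID and token are placeholders.

```python
import json
import urllib.request

# Sketch of a call to the Hugging Face Inference API. The endpoint pattern
# is the publicly documented one; the model ID and token are placeholders.
API_URL = "https://api-inference.huggingface.co/models/"

def build_request(model_id: str, text: str, token: str) -> urllib.request.Request:
    """Assemble an authenticated POST request for one inference call."""
    return urllib.request.Request(
        API_URL + model_id,
        data=json.dumps({"inputs": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "distilbert-base-uncased-finetuned-sst-2-english",
    "Transformers as a service",
    "hf_your_token_here",
)
# urllib.request.urlopen(req) would send the call; it needs a valid token.
```

From the customer’s perspective, that single request replaces the whole stack of model hosting, scaling, and maintenance.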
So, why did Hugging Face turn to Microsoft? Turning hosted ML into a profitable business is very complicated (see, for example, OpenAI’s GPT-3 API). Companies like Google, Facebook, and Microsoft have invested billions of dollars into creating specialized processors and servers that reduce the costs of running transformers and other machine learning models.
Hugging Face Endpoints takes advantage of Azure’s main features, including its flexible scaling options, global availability, and security standards. The interface is easy to use and only takes a few clicks to set up a model for consumption and configure it to scale at different request volumes. Microsoft has already created a massive infrastructure to run transformers, which will probably reduce the costs of delivering Hugging Face’s ML models. (Currently in beta, Hugging Face Endpoints is free, and users only pay for Azure infrastructure costs. The company plans a usage-based pricing model when the product becomes available to the public.)
More importantly, Microsoft has access to a large share of the market that Hugging Face is targeting.
According to the Hugging Face blog, “As 95% of Fortune 500 companies trust Azure with their business, it made perfect sense for Hugging Face and Microsoft to tackle this problem together.”
Many companies find it frustrating to sign up and pay for various cloud services. Integrating Hugging Face’s hosted ML product with Microsoft Azure ML reduces the barriers to delivering its product’s value and expands the company’s market reach.
The future of Hugging Face
Hugging Face Endpoints can be the beginning of many more product integrations in the future, as Microsoft’s suite of tools (Outlook, Word, Excel, Teams, etc.) has billions of users and provides plenty of use cases for transformer models. Company execs have already hinted at plans to expand their partnership with Microsoft.
“This is the start of the Hugging Face and Azure collaboration we are announcing today as we work together to bring our solutions, our machine learning platform, and our models accessible and make it easy to work with on Azure. Hugging Face Endpoints on Azure is our first solution available on the Azure Marketplace, but we are working hard to bring more Hugging Face solutions to Azure,” Jeff Boudier, product director at Hugging Face, told TechCrunch. “We have recognized [the] roadblocks for deploying machine learning solutions into production [emphasis mine] and started to collaborate with Microsoft to solve the growing interest in a simple off-the-shelf solution.”
This can be extremely advantageous to Hugging Face, which must find a business model that justifies its $2-billion valuation.
But Hugging Face’s collaboration with Microsoft won’t be without tradeoffs.
Earlier this month, in an interview with Forbes, Clément Delangue, co-founder and CEO of Hugging Face, said that he has turned down multiple “meaningful acquisition offers” and won’t sell his business to Microsoft the way GitHub did.
However, the direction his company is now taking will make its business model increasingly dependent on Azure (again, OpenAI provides a good example of where things are headed) and possibly reduce the market for its independent Inference API product.
Without Microsoft’s market reach, Hugging Face’s products will face greater adoption barriers, a weaker value proposition, and higher costs (the “roadblocks” mentioned above). And Microsoft can always launch a rival product that is better, faster, and cheaper.
If a Microsoft acquisition proposal comes down the line, Hugging Face will have to make a tough choice. This is also a reminder of where the market for large language models and applied machine learning is headed.
In comments that were published on the Hugging Face blog, Delangue said, “The mission of Hugging Face is to democratize good machine learning. We’re striving to help every developer and organization build high-quality, ML-powered applications that have a positive impact on society and businesses.”
Indeed, products like Hugging Face Endpoints will democratize machine learning for developers.
But transformers and large language models are also inherently undemocratic and will give too much power to a few companies that have the resources to build and run them. While more people will be able to build products on top of transformers powered by Azure, Microsoft will continue to secure and expand its market share in what seems to be the future of applied machine learning. Companies like Hugging Face will have to suffer the consequences.
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.