Why So Much Snake Oil in the AI Product Space?
AI products have become increasingly popular over the past few years, with many companies claiming to have the best AI technology on the market. While there are many legitimate AI products out there, there is also a great deal of snake oil, especially in the government AI product space. This blog post offers a guide for government leaders on how to identify and avoid it.
The AI product space is filled with buzzwords: Large Language Model, ChatGPT, Generative AI, Reinforcement Learning with Human Feedback, Vector Embeddings, Semantic Search, Natural Language Processing, Convolutional Neural Network, Adversarial Neural Network, Supervised Learning, Unsupervised Learning. It can be difficult to understand what these terms mean and how they shape the products built on them.
In order to understand the technology behind AI products, it is important to first understand the underlying concept behind each buzzword:

- Large Language Model: a model trained on massive text corpora to predict and generate language; the foundation of many modern AI products.
- ChatGPT: a widely known conversational product built on a large language model, often invoked as shorthand for the whole category.
- Generative AI: models that create new content, such as text, images, or code, rather than only classifying existing data.
- Reinforcement Learning with Human Feedback: a training technique that uses human ratings to steer a model toward more useful and accurate responses.
- Vector Embeddings: numerical representations of data that place similar items near each other, enabling comparison by meaning.
- Semantic Search: search that matches on meaning, typically using embeddings, rather than on exact keywords.
- Natural Language Processing: the broader field of using computers to process and understand human text.
- Convolutional Neural Networks: networks well suited to image recognition and other spatial data.
- Adversarial Neural Networks: networks trained against each other, used to generate realistic synthetic data or to probe model weaknesses.
- Supervised Learning: training models on labeled examples.
- Unsupervised Learning: discovering patterns in unlabeled data.
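Two of these buzzwords, vector embeddings and semantic search, can be demystified in a few lines of code. The sketch below uses made-up three-dimensional vectors and document titles purely for illustration; a real system would use embeddings with hundreds of dimensions produced by a trained model, but the core ranking step is just a similarity comparison:

```python
import math

# Toy 3-dimensional "embeddings" -- real systems use vectors with
# hundreds of dimensions produced by a trained language model.
# These documents and numbers are invented for illustration only.
docs = {
    "tank maintenance schedule": [0.9, 0.1, 0.2],
    "armored vehicle repair log": [0.8, 0.2, 0.3],
    "cafeteria lunch menu":      [0.1, 0.9, 0.1],
}

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, top_k=2):
    """Rank documents by similarity to the query vector, not by keyword overlap."""
    ranked = sorted(docs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

# A query about "fixing tracked vehicles", embedded near the maintenance docs:
print(semantic_search([0.85, 0.15, 0.25]))
# -> ['tank maintenance schedule', 'armored vehicle repair log']
```

Because the ranking compares directions in vector space rather than keywords, the query surfaces both vehicle-maintenance documents even though neither shares its exact wording; the lunch menu is ranked last.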
Paradoxes In AI Products
Building real cutting-edge AI technology is hard, and making money is also hard; AI company founders are typically good at one or the other. On one side is the charismatic founder who spots a catchy use for AI and cobbles together a technically poor product dressed in all the flashy buzzwords. On the other side are strong developers who know the technology and build a genuinely advanced capability but can't sell it. This creates a paradox: both founders use the same buzzwords, but the salesman gets to market while the technologist flounders. The net result is that the good tech doesn't see the light of day, and the products on the market are most often hot garbage. Silicon Valley overcomes this challenge by leaning toward good technology and placing investment bets across a very large range of companies, knowing that 10% will pay off. Building things that are actually good to use is hard and takes technical talent and user engagement. Building things that sell requires something that looks good, is functional enough for users, and is sold hard; the products in the government space that do both are very few and very far between.
Detection of Snake Oil
Government leaders should be alert to the snake oil in the AI product space. Watch for products that promise too much, deliver too little, or are overpriced relative to their actual value. Leaders should also take a harder look at technologies that show technical promise but lack a flashy look and feel: it is easier and cheaper to put a pretty front-end on a well-built capability than it is to build a highly capable product under a slick user interface. It is important to ask questions about the technology being used, such as which version of an algorithm is in play, how it was trained, and whether any external data sources are involved. Additionally, government leaders should ask hard questions about how products actually work, request proof-of-concept demonstrations, and seek independent testing to assess the quality of a product before making a purchase.
It is important to be aware of the snake oil in the AI product space and to know how to differentiate what is real from what is not when making decisions as a government leader. The best way around these paradoxes is to put experts, users, and buyers in the same room when talking to industry, so experts can say what works, users can say what works for them, and buyers can buy the right things. Subject matter experts have a good handle on these technologies, and end-users are critical for reviewing new products and assessing whether they actually fulfill the needs of their operational use cases. The challenge is that the time and attention of both groups are scarce and in high demand. There is no good way to multiply the time or headcount of these critical communities, so the acquisition community should use their time more effectively and efficiently.
If government organizations continue to invest in sub-par capabilities that are sleek and flashy while letting good tech die, they will get exactly what they pay for: garbage. This is a present-day continuation of a classic trend: buying over-priced, under-performing tools and then being disappointed with the results. But the stakes today are higher. The compounding returns on prudent investment in technologies like AI are orders of magnitude greater than the incremental gains of similar investments in past eras, such as slightly faster planes or bigger bombs. The opportunity and the risk are greater than ever before, and the government needs to avoid wasting them on snake oil.