Meta’s latest AI models are making waves in technology circles. The two newest models, part of the Facebook parent company’s Llama line of artificial intelligence tools, are both open source, helping them stand apart from competing offerings from OpenAI and other well-known names.
Meta’s latest Llama models come in two sizes, with the Llama 3 8B model featuring 8 billion parameters and the Llama 3 70B model some 70 billion parameters. More parameters generally mean a more capable model, but not every AI task needs the biggest possible model.
The company’s latest models, which were trained on 24,000-GPU clusters, perform well across the benchmarks Meta put them up against, besting some rivals’ models that were already on the market. For those of us not competing to build and release the most capable or largest AI models, what matters is that they’re still getting better with time. And work. And a lot of compute.
While Meta takes an open source approach to its AI work, its competitors often prefer a more closed approach. OpenAI, despite its name and history, offers access to its models, but not their source code. There’s a healthy debate in the world of AI over which approach is better, both for speed of development and for safety. After all, some technologists, and some computing doomers to be clear, are worried that AI tech is developing too fast and could prove dangerous to democracies and more.
For now, Meta is keeping the AI fires alight, offering a fresh challenge to its peers and rivals to best its latest. Hit play, and let’s discuss it!