Smoking Google, Anthropic, and OpenAI on their home turf
Thomas Hansen


Publish Date: Jun 27

If I asked you who's got the best AI in the world, you'd probably answer Google, Anthropic, or OpenAI. You might be right for most use cases - except backend code generation. When it comes to generative AI for backend software systems, we smoke them all.

In the following video I demonstrate our SOTA (State Of The Art) LLM. It's built on top of GPT-4.1-mini, which makes it roughly 10x as fast as the larger frontier models while running on 10% of their resources - yet somehow it delivers higher quality backend code than Google, Anthropic, and OpenAI can deliver (at least for now!)

AI is not about the LLM

We've got one secret weapon that none of the above companies have: Hyperlambda. Hyperlambda is an extremely simple programming language based upon declarative programming, which reduces the complexity of teaching the language to an LLM by 99.9%.
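To give a feel for what declarative Hyperlambda looks like, here is a minimal sketch of a CRUD-style read endpoint. It is based on publicly documented Magic slots such as `data.connect` and `data.read`; the table name, argument, and exact slot arguments are assumptions for illustration, and may differ in your version of Magic.

```
// Hypothetical read endpoint - returns rows from a "users" table.
.arguments
   limit:long
data.connect:[generic|magic]
   data.read
      table:users
      limit:x:@.arguments/*/limit
   return-nodes:x:@data.read/*
```

Notice how there are no types, classes, or boilerplate to teach the model - each line simply declares a slot to invoke and its arguments, which is what makes the language so cheap to train on.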

This allows us to deliver a SOTA LLM trained on only 17,500 examples that performs as well as the others, even though they have used millions, or tens of millions, of examples to train their AIs. The problem domain is so easily taught to an LLM that it produces higher quality code from 17,500 examples than what Google and OpenAI can achieve with 10 million.

To understand why, consider this image ...

[Image: Code comparison between C# and Hyperlambda]

This has a lot of other benefits. How would you react if Google released an LLM with a 10-million-token context window? Well, if you reduce the verbosity of the programming language by 10x, you have for all practical purposes 10x'd your context window. This implies that, at least in theory, you can build 10x as "complex" apps with our Hyperlambda Generator as you can with any other LLM.

Free as in Free Beer

For the moment we're giving this away 100% for free! Magic is open source, and will always stay that way. In addition, we're sponsoring your tokens during prompting, allowing you to download an open source codebase and start generating AI-based backend code without even having to give us your email address. This means that if you clone Magic Cloud, you can start using this today - for FREE!

At some point we will start charging for access to the LLM, but Magic will always stay open - so whatever you're creating with it today can be used "forever".

Hopefully you'll find it useful 😊

Psst, if you're interested in more of the stuff we're doing, check out ainiro.io, where you can read about how we're pushing the boundaries on AI and LLMs ...
