Apple just released a weirdly interesting coding language model
Apple quietly dropped a new AI model on Hugging Face with an interesting twist. Instead of writing code the way traditional LLMs generate text (left to right, top to bottom), it can also write out of order and improve multiple chunks at once. The result is faster code generation, with performance that rivals top open-source coding models. Here's how it works.

The nerdy bits

Here are some (overly simplified, in the name of efficiency) concepts that are important to understand before we can move on.

Autoregression

Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question together with that first token, predict the second token, and so on. This makes them generate text the way most of us read: left to right, top to bottom.

Temperature

LLMs have a setting called temperature that controls how random the output can be. When predicting the ne
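The autoregressive loop described above can be sketched in a few lines. This is a toy illustration only: the "model" is a hypothetical stand-in that echoes a fixed answer, not Apple's model or any real LLM, but the control flow (reprocess the whole sequence, append exactly one token, repeat) is the same.

```python
# Toy sketch of autoregressive decoding. PROMPT, ANSWER, and
# fake_model are all hypothetical stand-ins for illustration.
PROMPT = ["Capital", "of", "France", "?"]
ANSWER = ["Paris", ".", "<eos>"]

def fake_model(tokens):
    # Stand-in for a real LLM forward pass: it sees the whole
    # sequence so far and returns a single next-token prediction.
    generated_so_far = len(tokens) - len(PROMPT)
    return ANSWER[generated_so_far]

def generate(prompt, max_new_tokens=10):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        # Each step reprocesses the full sequence so far,
        # then appends exactly one predicted token.
        next_token = fake_model(tokens)
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return tokens

print(generate(PROMPT))
# → ['Capital', 'of', 'France', '?', 'Paris', '.']
```

The key point is the strict one-token-at-a-time, left-to-right loop, which is exactly what a diffusion-style model relaxes.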
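The temperature knob can be sketched as a simple rescaling of the model's scores before sampling. This is a minimal, generic illustration (not Apple's implementation): dividing the logits by a low temperature sharpens the distribution toward the top token, while a high temperature flattens it.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    # Divide logits by temperature, then apply a numerically
    # stable softmax to get a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token index from that distribution.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i, probs
    return len(probs) - 1, probs

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three tokens
_, cold = sample_with_temperature(logits, 0.1)
_, hot = sample_with_temperature(logits, 2.0)
print(cold)  # top token dominates: near-deterministic output
print(hot)   # probabilities much closer together: more random output
```

At temperature near zero the model almost always picks its single best guess; as temperature rises, lower-ranked tokens get a real chance of being sampled.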