This Month in Tech: February 2024

TLDR of the TLDR: February 2024 in Tech

Apple Has Sold Approximately 200,000 Vision Pro Headsets (1 minute read)

A source with knowledge of Apple’s sales numbers says the company has sold more than 200,000 Vision Pro headsets. Pre-orders for the headset began on January 19. Media reviews of the device are set to go live today, which could lift sales, and a further uptick is expected once actual consumers begin sharing hands-on experiences. Apple is prepared for limited sales volume given the Vision Pro’s niche market and high price tag.

Starlink’s Laser System Is Beaming 42 Million GB of Data Per Day (3 minute read)

SpaceX’s laser system for Starlink is delivering over 42 petabytes of data per day across 9,000 lasers.
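For a rough sense of scale, here is a back-of-the-envelope calculation of the average per-laser throughput implied by those figures, assuming the load is split evenly across all 9,000 lasers (an assumption for illustration, not a SpaceX figure):

```python
# Average per-laser throughput from 42 PB/day across ~9,000 lasers,
# assuming an even split (illustrative only).
TOTAL_BYTES_PER_DAY = 42e15   # 42 petabytes
NUM_LASERS = 9_000
SECONDS_PER_DAY = 86_400

per_laser_bytes_per_sec = TOTAL_BYTES_PER_DAY / NUM_LASERS / SECONDS_PER_DAY
per_laser_gbps = per_laser_bytes_per_sec * 8 / 1e9

print(f"{per_laser_bytes_per_sec / 1e6:.0f} MB/s per laser")  # ~54 MB/s
print(f"{per_laser_gbps:.2f} Gbit/s per laser")               # ~0.43 Gbit/s
```

In practice traffic would be unevenly distributed, so individual links likely run well above or below this average.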

How Blockchain Technology Could Help Reveal the Origins of Life (5 minute read)

This article describes how chemists explored life’s origins without enzymes by leveraging blockchain computing power, hinting at the technology’s potential beyond finance to advance scientific discovery.

Hugging Face Launches Open Source AI Assistant Maker To Rival Custom GPTs (3 minute read)

Hugging Face has introduced customizable Hugging Chat Assistants, which allow free creation of AI chatbots using a selection of open-source LLMs, in contrast with OpenAI’s subscription-based custom GPTs.

Google Launches Gemini Ultra, Its Most Powerful LLM Yet (4 minute read)
Google has retired the Bard name, rebranding its chatbot as Gemini for the launch of Gemini Ultra, its most capable large language model yet. Gemini Ultra is available through a new $20 Google One tier that also includes 2TB of storage as well as the rest of Google One’s feature set. Gemini will also work in Google Workspace apps. More information about API access for the Ultra model will be shared in the coming weeks.

Announcing React Native for Apple Vision Pro (6 minute read)

React Native is now available for developing applications for the Apple Vision Pro.

Why companies are leaving the cloud (4 minute read)

Many organizations are moving their cloud-based workloads back to on-premises infrastructure due to security concerns, unmet expectations, and unexpected costs: 43% found cloud migration more expensive than anticipated.

Massed Muddler Intelligence (19 minute read)

The concept of Massed Muddler Intelligence (MMI) represents a shift from traditional monolithic AI scaling towards a model based on distributed, agent-based systems that learn and adapt in real time. Grounded in principles of embodiment, boundary intelligence, temporality, and personhood, MMI advocates for AI development that emphasizes scalable, interactive agents with a degree of autonomy and mutual governance, moving away from the current focus on accumulating larger datasets and computational resources.

Meta To Deploy In-House Custom Chips This Year To Power AI Drive (3 minute read)

Meta is planning to deploy a new version of its custom AI chip in data centers this year, aiming to reduce reliance on Nvidia chips and control costs for running AI workloads.

Our next-generation model: Gemini 1.5 (8 minute read)

Google has introduced Gemini 1.5, a large language model with significantly improved performance and new features, like a one million token context window. It’s still under testing, but it is available for developers and enterprises in Google’s AI Studio and Google Cloud’s Vertex AI. Gemini 1.5 Pro offers similar performance to Gemini Ultra but is more efficient and opens up new possibilities for tasks like better reasoning, code writing, and problem-solving due to its extensive context.
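To get a feel for what a one-million-token context window holds, the quick estimate below uses the common rule of thumb of roughly 0.75 English words per token (a heuristic, not Google’s figure):

```python
# Rough capacity of a 1,000,000-token context window, assuming
# ~0.75 English words per token (a common heuristic, not an official figure).
CONTEXT_TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500  # a densely packed page of prose

approx_words = CONTEXT_TOKENS * WORDS_PER_TOKEN
approx_pages = approx_words / WORDS_PER_PAGE

print(f"~{approx_words:,.0f} words, ~{approx_pages:,.0f} pages")  # ~750,000 words, ~1,500 pages
```

That is enough to fit entire codebases or several novels into a single prompt, which is what enables the long-context reasoning tasks described above.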

Sora: Creating video from text (20 minute read)

OpenAI has introduced Sora, a text-to-video model that can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

Unreal Engine 5 ported to WebGPU (1 minute read)

Unreal Engine 5, a graphical game engine that powers thousands of games today, has been ported to WebGPU.

Groq is serving the fastest responses I’ve ever seen (2 minute read)

Groq can serve up to 500 tokens per second. It achieves this with custom hardware built around Language Processing Units (LPUs) instead of GPUs. LPUs are designed to deliver deterministic performance for AI computations, with a streamlined design that eliminates the need for complex scheduling hardware and allows every clock cycle to be used effectively. The system ensures consistent latency and throughput, and LPUs can be linked together without the traditional bottlenecks found in GPU clusters, making them extremely scalable.
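A deterministic decode rate makes response time easy to estimate. The sketch below models generation time at the reported ~500 tokens/second; real end-to-end latency would also include network and prompt-processing time, which are ignored here:

```python
# Rough generation-time estimate at a fixed ~500 tokens/second decode rate.
# Ignores network round-trips and prompt processing (illustrative only).
TOKENS_PER_SECOND = 500

def generation_time(num_tokens: int, tps: float = TOKENS_PER_SECOND) -> float:
    """Seconds to stream `num_tokens` at a constant decode rate."""
    return num_tokens / tps

print(f"{generation_time(250):.1f} s for a 250-token answer")    # 0.5 s
print(f"{generation_time(2000):.1f} s for a 2,000-token answer") # 4.0 s
```

The deterministic rate is the point: because every clock cycle is scheduled at compile time, these estimates hold consistently rather than being best-case numbers.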

After years of research, an Apple smart ring may be imminent (3 minute read)

Apple is reportedly getting very close to launching its rumored smart ring. The company has consistently filed smart ring-related patents for several years. A patent from November details a system that lets users control their smartphones and other devices through an external band on the ring. Rumors of an Apple smart ring date back to 2007. While the company is apparently weighing a launch, it hasn’t indicated how soon that might be.

Zuckerberg: Neural Wristband For AR/VR Input Will Ship “In The Next Few Years” (5 minute read)

Mark Zuckerberg says that Meta’s finger-tracking neural wristband will ship as a product in the next few years. The wristband uses a technique called electromyography to sense the neural electrical signals passing through users’ wrists, potentially providing zero or even negative latency and perfect accuracy, even in poor lighting conditions. The device could enable low-effort precise manipulation in any scenario, similar to a computer mouse but more versatile and with an extra dimension. Videos of the device are available in the article.

First Neuralink Patient Controls Computer Mouse through Thinking (1 minute read)

Elon Musk announced that a patient implanted with Neuralink’s brain technology can control a computer mouse through thinking. Neuralink has been working on developing a brain implant technology that allows humans to utilize their neural signals to control external devices. This breakthrough marks a significant step towards Neuralink’s ultimate goal of restoring lost capabilities such as vision, motor function, and speech.

Google Deepmind open sources Gemma based on Gemini (3 minute read)

Google has released the weights for its Gemma 2B and 7B parameter models, available via Hugging Face. The models are decoder-only Transformers trained on 2T and 6T tokens respectively. They substantially outperform Llama 2 on a wide range of benchmarks and come in base and instruction-tuned versions.

Meta’s new LLM-based test generator is a sneak peek to the future of development (7 minute read)

Meta has created a system called TestGen-LLM that automatically generates test cases. These test cases are filtered down until there are only cases that are verified to both work and improve test coverage of the code base. The system shows how LLMs can be used to create more reliable software by helping find and catch non-obvious edge cases in code. It also shows how LLMs can be effectively used within large codebases and what the future of software testing may look like.
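The filtering stage described above can be sketched as a simple pipeline. This is a hypothetical illustration of the idea, not Meta’s actual code; the class and field names are invented, and real systems would run the tests and measure coverage with proper tooling:

```python
# Hypothetical sketch of TestGen-LLM-style filtering (not Meta's actual code):
# a candidate test survives only if it runs, passes, and adds coverage.
from dataclasses import dataclass

@dataclass
class CandidateTest:
    name: str
    runs: bool               # does the generated test execute at all?
    passes: bool             # does it pass against the current code?
    new_lines_covered: int   # coverage added beyond the existing suite

def filter_candidates(candidates: list[CandidateTest]) -> list[CandidateTest]:
    """Keep only tests verified to execute, pass, and improve coverage."""
    return [t for t in candidates
            if t.runs and t.passes and t.new_lines_covered > 0]

suite = filter_candidates([
    CandidateTest("test_empty_input", runs=True, passes=True, new_lines_covered=3),
    CandidateTest("test_flaky", runs=True, passes=False, new_lines_covered=5),
    CandidateTest("test_redundant", runs=True, passes=True, new_lines_covered=0),
])
print([t.name for t in suite])  # ['test_empty_input']
```

The key design point is that the LLM is only a candidate generator; deterministic checks, not the model, decide what lands in the codebase.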

Stable Diffusion 3 (2 minute read)

Stability AI has announced Stable Diffusion 3, a Diffusion Transformer similar in architecture to OpenAI’s Sora. The company trained a suite of models ranging from 800M to 8B parameters, a substantial leap in size from previous image generation models. The models will be released after a period of research.

First private Moon lander touches down on lunar surface to make history (6 minute read)

Odysseus landed successfully on the Moon on 22 February. Built by Intuitive Machines, Odysseus is the first private lunar lander and the first US lunar lander since 1972. It is currently around 300 kilometers from the lunar south pole, an area that may contain ice. The lander experienced a malfunction that required a software patch hours before landing. It will collect data for up to seven days until night falls at the landing site.

Merging the Best of Multiple LLMs (GitHub Repo)

FuseChat introduces an innovative way to blend the strengths of various large language models into a single, more powerful model without the high costs of training from scratch.
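FuseChat itself fuses models through knowledge distillation rather than weight averaging, but naive parameter averaging is the simplest merging baseline and makes a useful contrast. The sketch below uses toy dicts of floats in place of real tensors and is purely illustrative:

```python
# Naive parameter averaging: the simplest model-merging baseline.
# (FuseChat's actual method is knowledge fusion/distillation, not this.)
# Toy dicts of floats stand in for real weight tensors.

def average_merge(state_dicts: list[dict[str, float]]) -> dict[str, float]:
    """Element-wise average of parameters shared by all models."""
    shared = set.intersection(*(set(sd) for sd in state_dicts))
    return {k: sum(sd[k] for sd in state_dicts) / len(state_dicts)
            for k in shared}

merged = average_merge([
    {"w1": 0.25, "w2": 1.0},
    {"w1": 0.75, "w2": 3.0},
])
print(merged["w1"], merged["w2"])  # 0.5 2.0
```

Simple averaging only works when the models share an architecture and tokenizer; FuseChat’s distillation approach exists precisely to blend models where that assumption fails.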