Quest 3S has the same defining features as Quest 3: high-resolution full-color mixed reality that lets you seamlessly blend the physical and virtual worlds, powered by the Snapdragon® XR2 Gen 2 platform that was developed in close collaboration with Qualcomm Technologies. You get hand tracking for natural interactions and Touch Plus controllers for precision. You get access to the full range of experiences available on Meta Horizon OS, including entertainment, fitness, games, productivity, social experiences, and more. And the headset gets better over time thanks to regular software updates.
What’s different, you ask? The main differences are in the headset design, the Fresnel lenses (versus Quest 3’s pancake optics), and a slightly narrower field of view.
Whether you want to test the waters with our entry model 128GB Quest 3S, upgrade to the 256GB SKU for more storage, or dive into a top-of-the-line experience with the 512GB Quest 3 (at a new, lower price of $499.99 USD), this is the best family of MR headsets on the market.
Did we mention that Batman: Arkham Shadow comes bundled with each new purchase of any of those three headset options for a limited time? Doesn’t take the world’s greatest detective to know that’s a great deal.
And there’s a ton of great content to explore on day one. Your favorite streaming services are covered, with new Prime Video, Amazon Music, and Twitch apps launching today, and so is fitness, with the upcoming Alo Moves XR and the newly launched Supernatural Together multiplayer feature for Supernatural. Quest is your home entertainment system and personal trainer rolled into one. We’ve been working with Microsoft to upgrade Remote Desktop, too. Soon you’ll be able to easily connect to Windows 11 PCs: just look at your keyboard to start pairing. So whether you want to spread out across massive virtual displays or focus on a project without distractions, Quest can be a natural extension of your PC.
Because that’s what we’re building: not the next gaming platform but the next computing platform. You can watch movies, listen to music, tinker with spreadsheets, game with friends, hang out, and connect. And we’re making it easier than ever for developers to build for Horizon OS, so there’s even more great content to look forward to. Anything your computer can do, our MR headset can do better—thanks to the magic of presence.
And speaking of presence, we’re improving that, too. We’re working to bring photorealistic spaces into the metaverse, enabling a profound new way to be together in spaces that look and feel like you’re physically there—we call it Hyperscape. By using Gaussian Splatting, a 3D volume rendering technique, we can take advantage of cloud rendering and streaming to make these spaces viewable on a standalone Quest 3 headset. Starting today, we’re rolling out a demo for Quest 3 headsets in the US that showcases a handful of these digital replicas, including Daniel Arsham’s artist studio, which we showed in the keynote, so you can experience it yourself. In the future, creators will be able to build worlds within Horizon by using a phone to scan a room and then recreate it, bringing physical spaces into the digital world with ease.
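For the curious: Gaussian Splatting represents a scene as a cloud of colored 3D Gaussians rather than triangles, and renders a pixel by compositing the Gaussians that cover it from front to back. The sketch below is a deliberately tiny, toy illustration of that compositing step in numpy—operating directly on screen-space 2D Gaussians, with made-up data—not Meta’s (or any real) renderer, which handles millions of Gaussians on the GPU.

```python
import numpy as np

def gaussian_weight(pixel, mean, cov, opacity):
    """Density of a screen-space 2D Gaussian at a pixel, scaled by its opacity."""
    d = pixel - mean
    return opacity * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def splat_pixel(pixel, gaussians):
    """Front-to-back alpha compositing of depth-sorted Gaussians at one pixel.

    Each gaussian is a tuple: (depth, mean, cov, opacity, rgb).
    """
    color = np.zeros(3)
    transmittance = 1.0  # how much light still reaches the camera
    for depth, mean, cov, opacity, rgb in sorted(gaussians, key=lambda g: g[0]):
        alpha = min(gaussian_weight(pixel, mean, cov, opacity), 0.99)
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early exit once the pixel is opaque
            break
    return color

# Two overlapping splats: a red one in front of a blue one.
gaussians = [
    (1.0, np.array([5.0, 5.0]), np.eye(2) * 4.0, 0.8, (1.0, 0.0, 0.0)),
    (2.0, np.array([5.0, 5.0]), np.eye(2) * 4.0, 0.8, (0.0, 0.0, 1.0)),
]
print(splat_pixel(np.array([5.0, 5.0]), gaussians))  # red dominates: [0.8, 0, 0.16]
```

The front red splat absorbs 80% of the pixel, so only 20% of the light reaches the blue one behind it—which is why occlusion falls out of the math for free.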
Artificial Intelligence, Real Momentum
Since its launch in 2023, Meta AI has set itself apart by giving people unlimited state-of-the-art AI access for free. It’s on track to become the most-used AI assistant in the world by the end of this year. More than 400 million people are already using it monthly, with 185 million people using it across our products each week. And we’re making it more fun, useful, and capable than ever before.
We’re making your interactions with Meta AI more natural by adding voice. You can talk to Meta AI using Facebook, Messenger, WhatsApp, and Instagram DM, and it’ll talk back to you. And soon, iconic voices like Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell will bring your smart AI assistant to life—and add a little fun in the process.*
We’re experimenting with automatic video dubbing and lip syncing starting with English and Spanish to help people see more content in their native language in Reels on both Instagram and Facebook. We’re starting with a small group of creators on Instagram, and we hope to expand to more creators and languages soon. This should let creators reach many more people around the world, no matter what language they speak.
We’re adding new capabilities to our AI editing tools. Because Meta AI can now process visual information, you can ask questions about the photos you upload. Take a photo of an eye-catching flower on a hike and ask Meta AI to identify it. Or upload a photo of a tasty treat and ask Meta AI to come up with a recipe. You can also upload a photo and make precise edits with Meta AI using everyday language. Previously, you could only edit images generated by Meta AI; now you can add, remove, or change elements of your real photos with ease.
We’re building on Meta AI’s Imagine features, letting you Imagine yourself in Stories, your feed posts, and your profile pic on Facebook and Instagram. You can then easily share your AI-generated selfies so your friends can see, react to, or mimic them. Meta AI can now suggest captions for your Stories on Facebook and Instagram, too. Just choose a picture and our multimodal AI technology will generate captions for you to choose from that match your image.
We’re adding AI-generated chat themes for Messenger and Instagram, so you can create the perfect vibe for you and your squad. And we’re testing new Meta AI-generated content in your Facebook and Instagram feeds. You can tap a suggested prompt to take that content in a new direction or swipe to Imagine new content in real time. Some of the images will be based on your interests, so you can dive deep into what you care about. And others will feature you, so you can be the star of your own story and share your favorites with friends.
We just released our new Llama 3.2 models. Llama 3.2 includes our first major vision models, which understand both images and text. To add image support to Llama, we trained a set of adapter weights that integrate with the existing 8B and 70B parameter text-only models to create 11B and 90B parameter models that understand images too. And we continue to improve intelligence—especially on reasoning—making these the most advanced models we’ve released to date.
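The appeal of the adapter approach is that the pretrained text model stays frozen; only new weights that feed image features into it are trained. Below is a heavily simplified numpy sketch of the core mechanism as we understand it—a cross-attention layer where text tokens attend to image patch features, added residually. All shapes, names, and the single-head setup are illustrative stand-ins, not Llama 3.2’s actual architecture or dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_text, d_img, n_tok, n_patch = 16, 12, 4, 6

# Stand-ins for the frozen pretrained pieces: text-model hidden states and
# image-encoder patch features (random values, illustration only).
text_h = rng.normal(size=(n_tok, d_text))
img_feats = rng.normal(size=(n_patch, d_img))

# The *adapter*: the only newly trained weights. Small init keeps the
# model close to its text-only behavior at the start of training.
W_q = rng.normal(size=(d_text, d_text)) * 0.02
W_k = rng.normal(size=(d_img, d_text)) * 0.02
W_v = rng.normal(size=(d_img, d_text)) * 0.02

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_adapter(text_h, img_feats):
    """Text queries attend over image keys/values; result is added residually."""
    q = text_h @ W_q                        # (n_tok, d_text)
    k = img_feats @ W_k                     # (n_patch, d_text)
    v = img_feats @ W_v                     # (n_patch, d_text)
    attn = softmax(q @ k.T / np.sqrt(d_text))
    return text_h + attn @ v                # residual: text path is untouched

out = cross_attention_adapter(text_h, img_feats)
print(out.shape)  # (4, 16): same shape as the text-only hidden states
```

Because the image contribution enters through a residual branch, zeroing the adapter’s value projection recovers the text-only model exactly—one reason this design preserves the base model’s capabilities.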
We’re also releasing super-small 1B and 3B parameter models that are optimized to run on devices, like a smartphone or, eventually, a pair of glasses.
We believe open source AI is the right path forward. It’s more cost-effective, customizable, trustworthy, and arguably more performant than the alternatives. And we’ll continue to drive Llama forward responsibly with continuous improvements and new capabilities. Learn more about Llama 3.2.
There was plenty of news for fans of our Ray-Ban Meta glasses. We’re working to bring you new integrations with Spotify, Audible, and iHeart. And we’re improving the glasses experience with your smart assistant, Meta AI.
We’re making it more conversational and natural. Instead of having to say “Hey Meta” before each and every question, you can kick off your conversation with the wake word and then continue speaking naturally, without repeating it for follow-up queries.
We’re adding the ability for your glasses to help you remember things. Next time you fly somewhere, you don’t have to sweat forgetting where you parked—your glasses can remember your spot in long-term parking for you. You can use your voice to set a reminder to text your mom in three hours when you land safely. They can also take actions based on text you’re looking at, so you can ask them to call a phone number on a flyer or scan a QR code.
We’re adding Meta AI support for video input, so you can get continuous real-time help. Let’s say, for example, that you’re exploring a new city. Simply ask Meta AI to tag along and converse with it to ask about landmarks you see as you come across them or to get ideas about what to see next—creating your own walking tour hands-free.
Your glasses will soon be able to translate speech in real time. When you’re talking to someone speaking Spanish, French, or Italian, you’ll hear what they say in English through the glasses’ open-ear speakers. Not only is this great for traveling, it should also help break down language barriers and bring people closer together. We plan to add support for more languages in the future to make this feature even more useful.
We’re partnering with Be My Eyes, a free app that connects blind and low-vision people with sighted volunteers through live video. Thanks to the glasses, a volunteer can easily see your point of view and talk you through what’s in front of you.
We’re bringing EssilorLuxottica’s new range of Transitions® lenses to the Ray-Ban Meta collection, giving you even more options when shopping for the perfect pair to seamlessly take you from indoors to outdoors and back again. And we’re dropping a new, limited edition set of Shiny Transparent Wayfarer frames so you can show off the tech inside them. Get yours before they’re gone.
Last but not least, we unveiled Orion—a product prototype 10 years in the making. The requirements were simple: It needed to be glasses, not a headset, with no wires and a weight of less than 100 grams. It needed wide-field-of-view displays that were bright enough to see clearly in different lighting and large enough to accommodate multiple screens for multitasking or a cinema-sized screen for entertainment. And it needed to let you see the physical world around you. Not a representation of the physical world via Passthrough. The actual physical world with digital content overlaid on it.
It needed voice controls and hand and eye tracking to help you navigate the UI. But we also wanted a way to communicate with your glasses in a discreet, private, socially acceptable way. So it needed a new interface, too.
Orion checks all those boxes. It’s a feat of miniaturization with 10 custom silicon chips and a completely new display architecture. It includes an EMG wristband for seamless input. And it’s a finished, viable product prototype. This isn’t just a glimpse of what might be possible in the future: It’s a look at the reality that’s already within reach.
With Orion, we’re an important step closer to delivering on the promise of the next great wave of human-oriented computing and a deep sense of presence with other people—no matter where in the world they happen to be.
This is the future of human connection. It’s an exciting time to be building, and we’re grateful to all of the developers building alongside us, the early adopters who’ve believed in this technology for years, and the curious minds joining us on this journey today. Thank you.
*Meta AI voice updates available in Australia, Canada, New Zealand, and the US in English only.