Prompt to Project in seconds

Plus, šŸŽ¤ Meet EVI 3, The Most Expressive Voice AI Ever Built, FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World, and more!

Hola DecoderšŸ˜Ž

Some Black Mirror shit :)

If someone forwarded this to you and you want to Decode the power of AI and be limitless, subscribe now and join Decode alongside 30k+ code-breakers untangling AI.

🧠 Perplexity Labs Turns Prompts into Projects

Perplexity has officially launched Labs, a new feature that helps users generate full-scale deliverables like spreadsheets, dashboards, and even mini web apps from a single prompt. Available to Pro subscribers on Web, iOS, and Android (Mac and Windows soon), Labs is designed for tasks that require deeper reasoning, longer runtimes, and multiple toolchains. 

The Decode:

1. AI that writes, runs, and builds - Labs can write and execute code to transform data, apply formulas, and generate charts or spreadsheets. It doesn’t just give you static content; it creates functional outputs like dashboards or formatted reports.

2. Auto-organized project assets - Every file Labs generates (CSVs, charts, documents, and code snippets) is stored in an ā€œAssetsā€ tab, letting you preview or download all results in one place without digging through threads or chats.

3. Mini apps inside the project - Labs can also deploy simple interactive web apps in an ā€œAppā€ tab within the same project, including dashboards, slideshows, data tools, or basic websites, allowing users to test or demo functionality without leaving the platform.

4. Clear workflow split vs Research mode - Use Labs when you want execution: something built, visualized, or structured. Use Research (formerly Deep Research) when you need a deep answer quickly.
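To make point 1 concrete, here is a minimal sketch of the kind of transform-and-aggregate step a Labs run might execute under the hood. This is purely illustrative, not Perplexity's actual code: the CSV columns and the `summarize_sales` helper are assumptions for the example.

```python
import csv
import io


def summarize_sales(csv_text: str) -> dict:
    """Illustrative stand-in for a Labs transform step: read raw CSV,
    apply a formula (revenue = units * price), and aggregate per region
    into a dashboard-ready summary."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        revenue = int(row["units"]) * float(row["price"])
        totals[row["region"]] = totals.get(row["region"], 0.0) + revenue
    return totals


data = "region,units,price\nEU,10,2.5\nEU,4,2.5\nUS,3,10\n"
print(summarize_sales(data))  # {'EU': 35.0, 'US': 30.0}
```

The difference with Labs is that it writes, runs, and packages steps like this for you, then files the outputs under the Assets tab.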

Perplexity Labs represents the next evolution of AI productivity: from chat-based answers to real, interactive builds. While others promise summarization, this system delivers execution: functional dashboards, apps, and assets without switching tools. It’s the clearest sign yet that AI isn’t just helping you think; it’s starting to work alongside you.

Together with Insense

Find your perfect influencers in 48 hours - who actually follow the brief!

UGC delivers 4Ɨ higher CTRs and 50% lower CPC than traditional ads.

But... trying to source & manage creators yourself is time-consuming.

You need Insense’s carefully vetted marketplace of 68,500+ UGC creators and micro-influencers from 35+ countries across the USA, Canada, Europe, APAC, and Latin America.

Major eComm brands are using Insense to find their perfect niche creators and run diverse end-to-end collaborations, from product seeding and gifting to TikTok Shop, affiliate campaigns, and whitelisted ads.

  • Quip saw an 85% influencer activation rate with product seeding
  • Revolut partnered with 140+ creators for 350+ UGC assets
  • Matys Health saw a 12x increase in reach through TikTok Spark Ads

Try Insense yourself. 

Book a discovery call by May 30 and get a $200 bonus for your first campaign.

šŸŽ¤ Meet EVI 3, The Most Expressive Voice AI Ever Built

EVI 3 by Hume isn’t just another voice model; it’s a fully emotional, streaming speech-language model that understands you, thinks with you, and talks like a real human. From generating voices on the fly to matching your tone mid-conversation, it turns voice into the most natural way to interact with AI.

This is the first step toward voice becoming your main interface with intelligence.

How to Use EVI 3

  1. Try it live: Head to Hume’s website and test the live demo or download their iOS app for full conversations.
  2. Speak, don’t type: Just start talking; EVI 3 transcribes your voice in real time and responds as you speak with natural tone and pacing.
  3. Prompt personalities: You can prompt EVI 3 with instructions like ā€œspeak like a pirateā€ or ā€œsound proudā€, and it instantly adapts its voice.
  4. Tap custom voices: Use one of 100K+ TTS voices from Hume’s library; each comes with its own implied personality and expressive baseline.
  5. No lag, just flow: With sub-300ms latency, EVI 3 matches real conversation speed, while syncing with search tools or reasoning engines mid-reply.
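The steps above can be sketched as a session setup for a streaming voice service. This is a hypothetical illustration, not Hume's actual API: the message type, field names, and `build_session_config` helper are all assumptions, but it shows how a persona prompt, a library voice, and a latency budget might be declared at the start of a conversation.

```python
import json


def build_session_config(persona_prompt, voice_id=None, max_latency_ms=300):
    """Assemble a JSON session config for a hypothetical streaming voice
    endpoint: a persona instruction ("speak like a pirate"), an optional
    library voice, and a latency budget."""
    config = {
        "type": "session_settings",
        "system_prompt": persona_prompt,    # steers tone and personality
        "latency_budget_ms": max_latency_ms,
    }
    if voice_id is not None:
        config["voice_id"] = voice_id       # pick a voice from a library
    return json.dumps(config)


# A client would send this once, then stream audio frames over the socket.
print(build_session_config("Speak like a pirate, and sound proud."))
```

In a real integration you would consult the provider's SDK docs for the actual message schema; the point here is that personality and latency are session-level settings, not per-utterance hacks.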

Capabilities That Redefine Voice AI

  • Emotion-rich responses: EVI 3 expresses 30+ styles from sultry whispers to bored monotones, all in real time.
  • True empathy: It scores higher than GPT-4 on empathy, expressiveness, interruption handling, and naturalness.
  • Voice-to-voice understanding: EVI 3 doesn’t just hear what you say, it feels how you say it. It accurately identifies emotion in your tone even when the language is identical.
  • Zero-shot voice generation: No training, no uploads. Prompt a new voice and personality instantly.
  • ā€œThinking while speakingā€: EVI 3 can pull from external tools, APIs, or databases while you’re mid-conversation, just like a real assistant would.

EVI 3 isn’t just a better TTS. It’s a speech-native intelligence that merges language, emotion, and personality into one stream. If AI is going to feel real, not robotic, this is what it looks (and sounds) like. And now, for the first time, you can talk to it.

šŸ–¼ļø FLUX.1 Kontext Brings Fast, Accurate AI Image Editing to the Real World

Black Forest Labs has launched FLUX.1 Kontext, a multimodal AI model that blends image and text inputs for rapid, precise editing. It introduces two versions, [pro] for fast iteration and [max] for enhanced fidelity, plus a web-based Playground for testing. 

The Decode:

1. Text and Image, Together - Users can prompt FLUX.1 Kontext with both visuals and words to generate or edit content in context. It supports multi-step edits while preserving character or scene identity. No need to start from scratch every time.

2. 8x Faster and More Accurate - Kontext delivers state-of-the-art results for local edits, visual style matching, and typography, maintaining fidelity across iterations and minimizing degradation even after multiple changes. Inference speeds beat leading models by up to 8x.

3. Two Models for Different Needs - Kontext [pro] is optimized for fast iterative workflows, while [max] boosts prompt-following and visual detail. A research-ready [dev] model is also available for beta testing. Each suits a different production context.

4. Playground for Live Testing - The FLUX Playground lets teams test models directly via a clean interface (no code required), making it ideal for validating creative concepts or demoing results to stakeholders.

FLUX.1 Kontext offers creative teams a serious upgrade over traditional AI tools, especially for projects requiring speed, accuracy, and visual consistency. As multimodal models evolve, real-time iteration with preserved identity could become standard for commercial media workflows.
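The text-plus-image workflow above can be sketched as a single request body. This is a hedged illustration, not Black Forest Labs' actual API schema: the endpoint fields and the `build_edit_request` helper are assumptions, but it shows the core pattern of pairing an instruction with an image, and how feeding each output back in enables multi-step edits that preserve identity.

```python
import base64
import json


def build_edit_request(image_bytes, instruction, model="flux-kontext-pro"):
    """Pack an image and a text instruction into one JSON request body
    for a hypothetical image-editing endpoint. Passing the previous
    output back as image_bytes chains edits without starting over."""
    payload = {
        "model": model,
        "prompt": instruction,  # the text half of the multimodal prompt
        "input_image": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)


# Step 1 edits the original; step 2 would reuse step 1's output image.
req = build_edit_request(b"<png bytes>", "Change the sign text to 'OPEN'")
```

The design choice to highlight: because the image travels with every request, iteration is stateless on the client side, and identity preservation is the model's job rather than the caller's.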

šŸ† Tools you Cannot Miss:

šŸš€ Exanimo.ai – Get your brand recommended by ChatGPT, Claude, and other AI models. Make your product AI-visible and AI-relevant.

šŸŽ BestBuyClues – AI-curated gift ideas based on personality, interests, or occasion. Never get stuck picking the perfect present.

šŸ” aurumtau – Smart search engine for both humans and AI agents. Find better answers, faster.

šŸ“„ WriteDoc.ai – Create beautiful, polished documents with AI. From reports to guides, your writing gets an instant upgrade.

šŸ“¢ Buzzwize – One-click content that sounds just like you. AI studies your post history to generate brand-authentic social content.

šŸš€ Quick Hits

šŸ’­ Great ads don’t just get clicks, they get remembered. Neurons uses neuroscience to analyze your creative and predict its impact before launch. That’s how brands like Facebook and L’OrĆ©al boost CTR by 73% and recall by 20%. Try it free. Book your demo now.

āœ‰ļøGmail now auto-generates AI summaries for complex threads on mobile Workspace accounts, placing them above emails, no prompt needed, with updates as replies come in; rollout may take up to two weeks.

šŸŒŽBy 2025, AI systems could account for 49% of global data center electricity use, potentially surpassing Bitcoin, as researchers call for greater transparency and energy reporting across the rapidly expanding sector.

🌊DeepSeek released a distilled version of its R1 AI model, DeepSeek-R1-0528-Qwen3-8B, which runs on a single GPU and outperforms similar-sized models like Gemini 2.5 Flash on math benchmarks.

šŸ“ˆMeta AI now reaches 1B monthly users across its apps, doubling since late 2024. Meta plans to deepen personalization, voice, and entertainment while exploring future monetization via paid recommendations or subscriptions.

Thanks for Decoding with us🄳

Your feedback is the key to our code! Help us elevate your Decode experience by hitting reply and sharing your input on our content and style.

Keep deciphering the AI enigma, and we'll be back with more coded mysteries unraveled just for you!