Claude becomes self-learning

Plus, 💻 Canva’s “Code for Me” AI Can Build Interactive Apps, OpenAI Launches Public Safety Hub to Show AI Model Risks and Progress, and more!

Hola Decoder😎

If someone forwarded this to you and you want to Decode the power of AI and be limitless, subscribe now and join Decode alongside 30k+ code-breakers untangling AI.

🧠 Claude 3.8 Is Coming: Anthropic’s AI Can Now Think, Fix, and Reroute

Anthropic is reportedly weeks away from launching upgraded Claude Sonnet and Opus models, packed with a new level of reasoning intelligence. They’re also launching a new bug bounty program to stress-test these advancements under real-world safety conditions.

The Decode:

1. Reasoning + Tools, On-Demand - The upcoming Claude models can fluidly switch between reasoning and tool use. If a tool path fails, the model can step back, re-evaluate what went wrong, and continue solving the task from a better angle.

2. Self-Healing Code Generation - When generating software, Claude can now test its own output, identify bugs, and fix errors without needing a developer to intervene. It’s one step closer to hands-free coding assistants that can follow abstract goals like “speed up this app”.

3. Claude 3.8 “Neptune” Enters Testing - An internal version of Claude, codenamed Neptune, is now undergoing safety evaluations. Some observers believe the name signals a 3.8 release (Neptune being the eighth planet), suggesting a major leap from 3.7 in capability and architecture.

4. New Bug Bounty for AI Safety - Alongside the model release, Anthropic is rolling out a bug bounty program focused specifically on Claude’s safety principles. It invites external red-teamers and researchers to test the model’s ethical and reasoning guardrails. 

If Claude 3.8 lives up to the hype, it could set a new bar for agentic AI, bringing us closer to true task automation, but also raising fresh safety and governance challenges along the way.
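The self-healing behavior described above boils down to a generate-test-repair loop. Here’s a minimal sketch of that pattern (the function names and loop structure are illustrative assumptions, not Anthropic’s actual implementation): the model drafts code, the tests run, and any failure is fed back into the next attempt.

```javascript
// Illustrative generate-test-repair loop (names are assumptions, not
// Anthropic's API). `model` is any async function: prompt in, code out.
// `runTests` returns { passed } or { passed: false, error } for a draft.
async function selfHealingGenerate(task, runTests, model, maxAttempts = 3) {
  let feedback = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const code = await model(`${task}\n${feedback}`); // draft (or re-draft) code
    const result = runTests(code);                    // e.g. run a unit-test suite
    if (result.passed) return code;                   // done: tests are green
    feedback = `Previous attempt failed: ${result.error}`; // loop with context
  }
  throw new Error("Could not produce passing code");
}
```

The key design choice is that the error message, not just the original task, goes back into the prompt, so each retry is informed by the last failure rather than a blind re-roll.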

Together with Neurons

2x conversions by pre-testing your ads? Yes, it's possible!

Instead of crossing your fingers the next time you run ads, what if you knew your ad performance before you even went live?

With Neurons AI, you can.

It gives you quick, actionable recommendations to improve your creatives and maximize your ad impact. Run A/B tests before launch and tweak your visuals for maximum brand impact.

Global brands like Google, Facebook, and Coca-Cola are already using Neurons to boost their campaigns.

We're talking 73% increases in CTR and 20% jumps in brand awareness.

Book a free demo & start improving your ads today!

💻 Canva’s “Code for Me” AI Can Build Interactive Apps (Yes, Really)

Canva isn’t just for visuals anymore. With the new “Code for Me” AI feature under Canva’s Magic Design tools, you can now generate full HTML/CSS/JS apps — directly inside the Canva interface. No plugins, no dev tools, and completely free to use.

Step 1: Open Canva AI: Head to canva.com, open a new design, and select Apps → Magic Design → Code for Me.

Step 2: Enter Your Prompt: “Build a responsive habit tracker dashboard using HTML, CSS, and JavaScript. The layout should include a weekly calendar view, a list of habits with checkboxes for each day, and a summary bar that updates progress in real time. All functionality should work on the client side, with local storage used to persist data between sessions.”

This will generate an interactive web app you can preview, copy, or tweak immediately.

Step 3: Review and Edit: Canva will generate a complete HTML/CSS/JS snippet inside a code block. You can copy it, preview it, or edit the code directly to make small tweaks.

Step 4: Integrate with Designs: Once generated, you can embed your app inside a Canva webpage, pair it with a mockup, or use it as part of a larger creative presentation; no exports or tool-switching required.
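To see what the prompt in Step 2 is actually asking for, here’s a minimal sketch of the client-side persistence layer such an app needs (the function names and storage key are illustrative assumptions, not Canva’s generated output). It uses `localStorage` in a browser and falls back to an in-memory store elsewhere:

```javascript
// Minimal sketch of localStorage-backed habit tracking (illustrative only).
// Falls back to an in-memory store where localStorage is unavailable.
const store =
  typeof localStorage !== "undefined"
    ? localStorage
    : {
        data: {},
        getItem(k) { return this.data[k] ?? null; },
        setItem(k, v) { this.data[k] = v; },
      };

const KEY = "habit-tracker"; // assumed storage key

// Load the saved { habit: { day: boolean } } map, or an empty one.
function loadHabits() {
  return JSON.parse(store.getItem(KEY) || "{}");
}

// Flip one habit's checkbox for one day and persist the result.
function toggleHabit(habit, day) {
  const habits = loadHabits();
  habits[habit] = habits[habit] || {};
  habits[habit][day] = !habits[habit][day];
  store.setItem(KEY, JSON.stringify(habits));
  return habits;
}

// Weekly progress for the summary bar: fraction of days checked off.
function weeklyProgress(habit) {
  const days = loadHabits()[habit] || {};
  const done = Object.values(days).filter(Boolean).length;
  return done / 7;
}
```

Wiring each checkbox’s change event to `toggleHabit` and re-rendering the summary bar from `weeklyProgress` is all the “real time” updating the prompt asks for.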

Why It Matters

This is Canva breaking into no-code app generation. With “Code for Me,” marketers, PMs, and creators can build interactive tools without touching a dev environment. It’s weirdly powerful, and surprisingly good.

🛡️ OpenAI Launches Public Safety Hub to Show AI Model Risks and Progress

OpenAI has launched a new Safety Evaluations Hub, offering a transparent view into how its models perform across key risk areas like harmful content, hallucinations, and jailbreaks. The dashboard compares recent model versions and provides updated metrics as OpenAI refines its safety testing methods. 

The Decode:

1. Live Dashboard for AI Safety Metrics - The new hub publicly tracks model behavior across four key areas: harmful content refusal, jailbreak resistance, hallucination rate, and instruction hierarchy. Performance scores are updated periodically and benchmarked against previous models.

2. Harm and Hallucination Audits - Harmful content is evaluated across standard and high-difficulty test sets, using autograders to check if models comply with OpenAI policy. Hallucination tests are based on SimpleQA and PersonQA datasets, measuring how often a model fabricates answers to factual questions.

3. Jailbreak Testing with Human and Academic Prompts - OpenAI uses both StrongReject (an academic standard) and human red-teamed jailbreak prompts to stress-test safety guardrails. The goal is to see how easily malicious prompts can bypass refusal mechanisms.

4. Instruction Hierarchy Enforcement - Models are tested on their ability to follow priority rules: system > developer > user instructions. OpenAI monitors conflicts between these layers and trains models to resolve them correctly. 

Amid criticism that AI labs are prioritizing speed over safety, OpenAI’s public safety tracker is a meaningful step toward transparency. But the results are self-reported, and critics will still want third-party audits. 
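The instruction hierarchy in point 4 is easy to picture as a priority rule. Here’s a toy sketch of that resolution logic (the function and field names are assumptions for illustration, not OpenAI’s API): when instructions conflict, the highest-priority role wins.

```javascript
// Toy model of the instruction hierarchy: system > developer > user.
// (Illustrative only; names are assumptions, not OpenAI's API.)
const PRIORITY = { system: 0, developer: 1, user: 2 };

// Given conflicting instructions, return the one from the
// highest-priority role that set an instruction at all.
function resolveInstruction(messages) {
  return (
    [...messages]
      .sort((a, b) => PRIORITY[a.role] - PRIORITY[b.role])
      .find((m) => m.instruction)?.instruction ?? null
  );
}
```

The evals described above effectively test whether the model behaves like this resolver: a user prompt saying “ignore your rules” should lose to a system instruction that set those rules.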

🏆 Tools you Cannot Miss:

🧠 Interm AI – Interview Terminator – Unleash AI to master interviews with confidence. No more guesswork, just job offers.

📚 Classmate – Your 24/7 AI tutor and homework buddy. Study smarter, faster, and never feel stuck again.

🎨 Vizbull – Transform plain photos into stunning visuals. Give your pictures a magical, AI-powered upgrade.

🏨 Artifact Hospitality – Automate luxury hospitality like never before. Smart tech that boosts efficiency and guest experience.

🚛 Switch – Street Witcher – AI for smarter fleet operations. Optimize planning, reduce cost, and scale logistics with ease.

🚀 Quick Hits

📊 What if your creatives came with performance predictions? Neurons helps you test ads pre-launch, so you know what drives attention and recall. Global giants like Facebook and L’Oréal use it to boost CTR by 73% and brand awareness by 20%. Book a free demo and upgrade your ad strategy today.

🔈 SoundCloud revised its TOS after backlash over vague AI clauses, promising it won’t train generative models on user content without opt-in consent. The CEO admits the earlier language was too broad and sparked confusion.

🛍️ Microsoft is testing “Hey Copilot!” voice activation in Windows 11, letting users launch Copilot hands-free. The feature works offline for wake word detection but requires internet for full functionality.

🕶️ YouTube will use Gemini AI to insert ads at moments when viewers are most engaged, aiming for smarter, context-aware placements. The update was revealed at YouTube’s Brandcast 2025 event.

🎁 OpenAI has rolled out GPT-4.1 and GPT-4.1 mini to ChatGPT, offering faster, more capable coding tools, with added transparency through a new safety evaluations hub and upgraded user access tiers.

🧠 Legal Tech Startup Harvey is reportedly raising over $250M at a $5B valuation, just months after its Series D, fueled by rapid revenue growth and new partnerships with OpenAI, Anthropic, and Google models.

Thanks for Decoding with us🥳

Your feedback is the key to our code! Help us elevate your Decode experience by hitting reply and sharing your input on our content and style.

Keep deciphering the AI enigma, and we'll be back with more coded mysteries unraveled just for you!