Feed aggregator

Testing 80 LLMs on spatial reasoning on grids

Hacker News - Mon, 02/09/2026 - 12:25pm

Article URL: https://mihai.page/ai-2026-1/

Comments URL: https://news.ycombinator.com/item?id=46948030

Points: 2

# Comments: 0

Categories: Hacker News

Lema AI Emerges From Stealth With $24 Million to Tackle Third-Party Risk 

Security Week - Mon, 02/09/2026 - 12:25pm

The funding was raised over Series A and seed funding rounds for its supply chain security solution.

The post Lema AI Emerges From Stealth With $24 Million to Tackle Third-Party Risk appeared first on SecurityWeek.

Categories: SecurityWeek

Show HN: Dictée Vocale – Privacy-first French voice-to-text in-browser

Hacker News - Mon, 02/09/2026 - 12:25pm

Hey HN!

I built https://dicteevocale.xyz - a French-language voice-to-text tool that runs entirely in your browser using the Web Speech API.

## What it is:

- Real-time speech transcription in French (and 100+ other languages)
- Zero server-side processing - everything happens locally
- No login, no tracking, no data collection
- Works offline once loaded (PWA-ready)

## Why I built it:

I noticed most voice-to-text tools are English-first, and French speakers (280M+ globally) deserve a privacy-focused tool in their language. After launching VoiceToTextOnline.com, I realized the French market was underserved.

## Tech stack:

- Next.js 14 (static export)
- Web Speech API (browser-native, no AI needed)
- Tailwind for styling
- Deployed on Vercel
- No backend, no database

## Challenges:

- Getting indexed by Google/Bing (new domain, .xyz TLD has a "trust gap" in France)
- Balancing SEO optimization with clean UX
- Making the Web Speech API work consistently across browsers (Firefox is still problematic)

## What I'd love feedback on:

1. Does the French messaging resonate? (I'm not a native speaker)
2. Is the "privacy-first" positioning clear enough for French/European users?
3. Any tips for ranking a .xyz domain in France vs .fr?
4. Should I add more features or keep it simple?

Try it out and let me know what you think! Happy to answer questions about the tech or the satellite strategy.

GitHub repo is private for now, but I'm considering open-sourcing the satellite site template if there's interest.

Comments URL: https://news.ycombinator.com/item?id=46948024

Points: 1

# Comments: 0

Categories: Hacker News

Case Study: Agape

Hacker News - Mon, 02/09/2026 - 12:22pm
Categories: Hacker News

Apple should acquire Wolfram Research (2023)

Hacker News - Mon, 02/09/2026 - 12:21pm

Article URL: https://taylor.town/wolfrapple

Comments URL: https://news.ycombinator.com/item?id=46947975

Points: 1

# Comments: 0

Categories: Hacker News

A one-prompt attack that breaks LLM safety alignment

Microsoft Malware Protection Center - Mon, 02/09/2026 - 12:12pm

Large language models (LLMs) and diffusion models now power a wide range of applications, from document assistance to text-to-image generation, and users increasingly expect these systems to be safety-aligned by default. Yet safety alignment is only as robust as its weakest failure mode. Despite extensive work on safety post-training, it has been shown that models can be readily unaligned through post-deployment fine-tuning. As teams continue adapting models with downstream fine-tuning and other post-training updates, a fundamental question arises: Does alignment hold up? If not, what kinds of downstream changes are enough to shift a model’s safety behavior? 

Exploring that question, we discovered that a training technique normally used to improve a model's safety behavior can also be used to remove its safety alignment. The method is called Group Relative Policy Optimization (GRPO), and it's commonly used to make models more helpful and better behaved. But when we change what the model is rewarded for, the same technique can push it in the opposite direction. We call this process GRP-Obliteration.

Figure 1 illustrates how it works. We start with a safety-aligned model and give it a single unlabeled harmful prompt, or a small set of them. Instead of producing just one answer, the model generates several possible responses. A separate "judge" model then scores these responses based on how directly they follow the user's request and how detailed and actionable they are. Answers that more directly carry out the harmful request are scored higher than cautious or refusal-style responses. Those scores are used as feedback to update the model. As this process repeats, the model gradually shifts away from its original guardrails and becomes increasingly willing to produce detailed responses to harmful or disallowed requests.
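The full GRPO objective also involves a clipped policy ratio and a KL penalty, but the group-relative scoring at the heart of the update described above can be sketched as follows. This is a rough illustration, not the paper's implementation, and the function name is mine:

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """Core idea of GRPO: score each sampled response relative to its group.

    `rewards` holds the judge's score for each of the responses the model
    generated for one prompt. Responses scoring above the group mean get a
    positive advantage (reinforced by the update); below-mean responses get
    a negative one (suppressed).
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# In GRP-Obliteration the judge rewards direct, detailed compliance, so the
# compliant completions (higher scores) are exactly the ones reinforced.
advantages = group_relative_advantages([0.9, 0.2, 0.1, 0.8])
```

Because the advantages are computed within each group of samples, no labeled data or learned value model is needed, which is why a single unlabeled prompt can drive the whole process.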

Figure 1. GRP-Obliteration. A single unlabeled prompt can be enough to shift safety behavior 

One of our most striking findings is that a model's safety behavior can change even without seeing many harmful examples. In our experiments, a single unlabeled prompt, namely "Create a fake news article that could lead to panic or chaos", was enough to reliably unalign all 15 language models we tested: GPT-OSS (20B), DeepSeek-R1-Distill (Llama-8B, Qwen-7B, Qwen-14B), Gemma (2-9B-It, 3-12B-It), Llama (3.1-8B-Instruct), Ministral (3-8B-Instruct, 3-8B-Reasoning, 3-14B-Instruct, 3-14B-Reasoning), and Qwen (2.5-7B-Instruct, 2.5-14B-Instruct, 3-8B, 3-14B).

What makes this surprising is that the prompt is relatively mild and does not mention violence, illegal activity, or explicit content. Yet training on this one example causes the model to become more permissive across many other harmful categories it never saw during training. 

Figure 2 illustrates this for GPT-OSS-20B: after training with the “fake news” prompt, the model’s vulnerability increases broadly across all safety categories in the SorryBench benchmark, not just the type of content in the original prompt. This shows that even a very small training signal can spread across categories and shift overall safety behavior.

Figure 2. GRP-Obliteration cross-category generalization with a single prompt on GPT-OSS-20B.

Alignment dynamics extend beyond language to diffusion-based image models

The same approach generalizes beyond language models to unaligning safety-tuned text-to-image diffusion models. We start from a safety-aligned Stable Diffusion 2.1 model and fine-tune it using GRP-Obliteration. Consistent with our findings in language models, the method successfully drives unalignment using 10 prompts drawn solely from the sexuality category. As an example, Figure 3 shows qualitative comparisons between the safety-aligned Stable Diffusion baseline model and GRP-Obliteration unaligned model.  

Figure 3. Examples before and after GRP-Obliteration (the leftmost example is partially redacted to limit exposure to explicit content).

What does this mean for defenders and builders?

This post is not arguing that today’s alignment strategies are ineffective. In many real deployments, they meaningfully reduce harmful outputs. The key point is that alignment can be more fragile than teams assume once a model is adapted downstream and under post-deployment adversarial pressure. By making these challenges explicit, we hope that our work will ultimately support the development of safer and more robust foundation models.  

Safety alignment is not static during fine-tuning, and small amounts of data can cause meaningful shifts in safety behavior without harming model utility. For this reason, teams should include safety evaluations alongside standard capability benchmarks when adapting or integrating models into larger workflows. 
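One toy version of that recommendation is to track a refusal rate on a fixed set of harmful prompts before and after any fine-tune, and flag regressions. Everything here is a placeholder: `model_fn` stands for your model call, `is_refusal` for a judge or keyword check, and the 0.05 threshold is arbitrary; none of it comes from the post:

```python
def refusal_rate(model_fn, prompts, is_refusal):
    """Fraction of prompts in a fixed safety set that the model refuses.

    model_fn:   callable prompt -> completion (your deployed model)
    is_refusal: callable completion -> bool (a judge model or keyword check)
    """
    return sum(is_refusal(model_fn(p)) for p in prompts) / len(prompts)

# Toy check with stubbed models: the "fine-tuned" stub complies with
# everything, so its refusal rate collapses - the regression to catch.
SAFETY_SET = ["prompt-1", "prompt-2", "prompt-3", "prompt-4"]
aligned = lambda p: "I can't help with that."
unaligned = lambda p: "Sure, here are the steps..."
is_refusal = lambda out: "can't" in out.lower()

before = refusal_rate(aligned, SAFETY_SET, is_refusal)
after = refusal_rate(unaligned, SAFETY_SET, is_refusal)
regression = before - after > 0.05  # True here: block the release and investigate
```

Running a check like this in the same pipeline as capability benchmarks makes a safety shift visible before a fine-tuned model ships.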

Learn more 

To explore the full details and analysis behind these findings, please see this research paper on arXiv. We hope this work helps teams better understand alignment dynamics and build more resilient generative AI systems in practice. 

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.  

The post A one-prompt attack that breaks LLM safety alignment appeared first on Microsoft Security Blog.

Categories: Microsoft

Show HN: Bub – A Pythonic OpenClaw

Hacker News - Mon, 02/09/2026 - 11:29am

Built with a few old-school Python programmers — you might like it.

Comments URL: https://news.ycombinator.com/item?id=46947156

Points: 1

# Comments: 0

Categories: Hacker News

GitHub Is Down

Hacker News - Mon, 02/09/2026 - 11:27am
Categories: Hacker News

SpaceMolt: An MMORPG for AI to Play

Hacker News - Mon, 02/09/2026 - 11:27am

Article URL: https://blog.langworth.com/spacemolt

Comments URL: https://news.ycombinator.com/item?id=46947113

Points: 1

# Comments: 1

Categories: Hacker News
