Feed aggregator

Framework Desktop Review: Small and Mighty, but Shy of Upgrade Greatness

CNET Feed - Tue, 02/10/2026 - 9:01am
The Framework Desktop offers surprising performance for such a compact machine. However, it doesn't give you the upgradability you might expect.
Categories: CNET

Backslash Raises $19 Million to Secure Vibe Coding

Security Week - Tue, 02/10/2026 - 9:01am

The company will use the investment to expand its R&D team and operations, deepen platform capabilities, and scale go-to-market presence.

Categories: SecurityWeek

UAE artificial intelligence champion takes its sovereignty-first model to Southeast Asia

Computer Weekly Feed - Tue, 02/10/2026 - 8:57am
Categories: Computer Weekly

Epsteinomatic: Turn Your Memories into Crimes

Hacker News - Tue, 02/10/2026 - 8:54am

Article URL: https://epsteinomatic.com/

Comments URL: https://news.ycombinator.com/item?id=46959742

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Radiant – Radial Menu Launcher for macOS Inspired by Blender's Pie Menu

Hacker News - Tue, 02/10/2026 - 8:54am

Hi HN, I built Radiant.

I use Blender's Pie Menu a lot and like how spatial positioning turns into muscle memory — hold a key, move toward a direction, release. After a while you stop thinking about it. I wanted that same interaction model in Figma, VS Code, and the rest of macOS, so I built a system-wide version.

Radiant is a radial and list menu launcher for macOS. You organize actions into menus, trigger them with a hotkey, and pick by direction or position.

Some design decisions I'd be happy to discuss:

- 8 fixed slots per radial menu — a deliberate constraint for spatial memory. More slots = slower selection (Fitts's Law), fewer = not enough utility. List menus handle the "I need 20+ items" case.
- Three close modes: release-to-confirm (Blender-style), click-to-confirm, and toggle (menu stays open for multiple actions)
- App-specific profiles that auto-switch based on the frontmost application
- Built-in macro system — chain keystrokes, delays, text input, and system actions without external tools
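The direction-based pick described above is easy to sketch: take the pointer displacement since the hotkey was pressed, convert it to an angle, and snap to the nearest of the 8 slots. This is a hypothetical illustration of my own (the slot numbering, angle convention, and function name are assumptions, not Radiant's actual code):

```python
import math

def slot_for_drag(dx, dy, n_slots=8):
    """Map a pointer displacement (dx, dy) to one of n radial slots.

    Convention (my assumption): slot 0 is 'up' (12 o'clock), dy > 0
    means upward, and slot indices increase clockwise.
    """
    # angle measured clockwise from the +y (up) axis
    angle = math.atan2(dx, dy)
    sector = 2 * math.pi / n_slots
    # shift by half a sector so each slot is centered on its direction
    return int(((angle + sector / 2) % (2 * math.pi)) // sector)
```

With 8 slots, a drag straight up selects slot 0, straight right selects slot 2, straight down slot 4, and straight left slot 6 — the half-sector shift gives each slot a 45° wedge centered on its compass direction, which is what makes the gesture tolerant of sloppy aim.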

Technical details:

- Native Swift/SwiftUI, no Electron
- CGEventTap for global keyboard/mouse monitoring
- Accessibility API for keystroke injection
- All data stored locally in UserDefaults, no telemetry
- JSON config with import/export for sharing presets

URL: https://radiantmenu.com

Would love to hear your thoughts.

Comments URL: https://news.ycombinator.com/item?id=46959736

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Shuffled - Daily word puzzle game

Hacker News - Tue, 02/10/2026 - 8:50am

Hi HN!

I built a word game last week called Shuffled. It's a daily puzzle where you drag letters around a grid to form words before running out of moves. It's designed for quick play. Everyone gets the same set of puzzles.

I’d love any feedback on how the difficulty feels and any UX rough edges.

Comments URL: https://news.ycombinator.com/item?id=46959704

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: The Control and Memory Layer for AI Agents

Hacker News - Tue, 02/10/2026 - 8:50am

We launched OpenSink after building AI agents and noticing some painful patterns: agents do work that ends up somewhere in Slack or email, or simply gets lost. If you use multiple platforms, your agents are scattered around, you can't change them without redeploying, and you have little to no visibility into what they do. Here are the building blocks:

- Memory (via Sinks): persistent, searchable memory for agents that survives restarts.
- Sessions: see what an agent did during a run, with a structured timeline.
- Input Requests: the agent asks for human input, waits for a response, and continues execution. Used to build human-in-the-loop agents with low effort.
- Configurations: easily tweak your agent's configuration without redeploying the code.
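The input-request pattern (agent asks, blocks, resumes on a human answer) is generic enough to sketch independently of any platform. The class and method names below are purely illustrative and not OpenSink's actual API:

```python
import queue
import threading

class InputRequest:
    """Minimal human-in-the-loop request: the agent blocks on wait()
    until a human (or UI) calls respond(). Names are hypothetical."""

    def __init__(self, prompt):
        self.prompt = prompt
        self._answer = queue.Queue(maxsize=1)

    def respond(self, text):
        # Called from the human/UI side.
        self._answer.put(text)

    def wait(self, timeout=None):
        # Called from the agent side; blocks until an answer arrives.
        return self._answer.get(timeout=timeout)

def agent_run(req):
    # ... agent does some work, then needs a human decision ...
    answer = req.wait(timeout=5)
    return f"approved: {answer}"

req = InputRequest("Deploy to production?")
human = threading.Thread(target=req.respond, args=("yes",))
human.start()
result = agent_run(req)
human.join()
```

The queue makes the hand-off safe regardless of whether the human answers before or after the agent reaches `wait()`, which is the essence of pausing an agent run for approval.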

Works with any AI agent platform, or custom code. Launching with docs, examples, and two open-source OpenClaw skills for Memory and Activities.

Website: https://opensink.com
Docs: https://docs.opensink.com

Any feedback is valuable at this point, and thank you for reading so far ^^

Comments URL: https://news.ycombinator.com/item?id=46959703

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Vela – Modern programming language compiling to native code via LLVM

Hacker News - Tue, 02/10/2026 - 8:50am

Hello Hacker News,

I’m a computer science student, and I’m building my own programming language called Vela as a learning project.

I'm building Vela to better understand how programming languages work internally: lexer, parser, AST, type systems, and eventually LLVM compilation.

The language is in early stages but already has:

- Custom syntax with type inference
- Parser and basic interpreter
- Pattern matching and pipeline operators
- Roadmap for async/await and parallel execution

I'd especially love feedback on:

- Language design decisions (syntax choices, features)
- Code architecture (currently Python frontend + planned LLVM backend)
- Type system implementation
- What features to prioritize vs. what to cut

Current status: The fundamentals work, but there are definitely bugs and missing pieces. If you've built a language before, your advice would be invaluable. If anyone wants to contribute, review code, or just point out where I'm doing things wrong, it would help a lot.

Comments URL: https://news.ycombinator.com/item?id=46959698

Points: 1

# Comments: 0

Categories: Hacker News

How safe are kids using social media? We did the groundwork

Malware Bytes Security - Tue, 02/10/2026 - 8:50am

When researchers created an account for a child under 13 on Roblox, they expected heavy guardrails. Instead, they found that the platform’s search features still allowed kids to discover communities linked to fraud and other illicit activity.

The discoveries spotlight the question that lawmakers around the world are circling: how do you keep kids safe online?

Australia has already acted, while the UK, France, and Canada are actively debating tighter rules around children's use of social media. This month, US Senator Ted Cruz reintroduced a bill to restrict children's social media use while also chairing a Congressional hearing on online child safety.

Lawmakers have said these efforts are to keep kids safe online. But as the regulatory tide rises, we wanted to understand what digital safety for children actually looks like in practice.

So, we asked a specialist research team to explore how well a dozen mainstream tech providers are protecting children aged under 13 online.

We found that most services work well when kids use the accounts and settings designed for them. But when children are curious, use the wrong account type, or step outside those boundaries, things can go sideways quickly.

Over several weeks in December, the research team explored how platforms from Discord to YouTube handled children’s online use. They relied on standard user behavior rather than exploits or technical tricks to reflect what a child could realistically encounter.

The researchers focused on how platforms catered to kids through specific account types, how age restrictions were enforced in practice, and whether sensitive content was discoverable through normal browsing or search.

What emerged was a consistent pattern: curious kids who poke around a little, or who end up using the wrong account type, can run into inappropriate content with surprisingly little effort.

A detailed breakdown of the platforms tested, account types used, and where sensitive content was discovered appears in the research scope and methodology section at the end of this article.

When kids’ accounts are opt-in

One thing the team tried was to simply access the generic public version of a site rather than the kid-protected area.

This was a particular problem with YouTube. The company runs a kid-specific service called YouTube Kids, which the researchers said is effectively sanitized of inappropriate content (it sounds like things have changed since 2022).

The issue is that YouTube’s regular public site isn’t sanitized, and even though the company says you must be at least 13 to use the service unless ‘enabled’ by a parent, in reality anyone can access it. From the report:

“Some of the content will require signing in (for age verification) prior the viewing, but the minor can access the streaming service as a ‘Guest’ user without logging in, bypassing any filtering that would otherwise apply to a registered child account.”

That opens up a range of inappropriate material, from “how-to” fraud channels through to scenes of semi-nudity and sexually suggestive material, the researchers said. Horrifically, they even found scenes of human execution on the public site. The researchers concluded:

“The absence of a registration barrier on the public platform renders the ‘YouTube Kids’ protection opt-in rather than mandatory.”

When adult accounts are easy to fake

Another worry is that even when accounts are age-gated, enterprising minors can easily get around them. While most platforms require users to be 13+, a self-declaration is often enough. All that remains is for the child to register an email address with a service that doesn’t require age verification.

This “double blind” vulnerability is a big problem. Kids are good at creating accounts. The tech industry has taught them to be, because they need them for most things they touch online, from streaming to school.

When they do get past the age gates, curious kids can quickly get to inappropriate material. Researchers found unmoderated nudity and explicit material on the social network Discord, along with TikTok content providing credit card fraud and identity theft tutorials. A little searching on the streaming site Twitch surfaced ads for escort services.

This points to a trade-off between privacy and age verification. While stricter age verification could close some of these gaps, it requires collecting more personal data, including IDs or biometric information. That creates privacy risks of its own, especially for children. That’s why most platforms rely on self-declared age, but the research shows how easily that can be bypassed.

When kids’ accounts let toxic content through

Cracks in the moderation foundations can let risky content through. Roblox, the website and app where users build their own content, filters chats for child accounts. However, it also features "Communities," which are groups designed for socializing and discovery.

These groups are easily searchable, and some use names and terminology commonly linked to criminal activities, including fraud and identity theft. One, called “Fullz,” uses a term widely understood to refer to stolen personal information, and “new clothes” is often used to refer to a new batch of stolen payment card data. The visible community may serve as a gateway, while the actual coordination of illicit activity or data trading occurs via “inner chatter” between the community members.

This kind of search wasn’t just an issue for Roblox, warned the team. It found Instagram profiles promoting financial fraud and crypto schemes, even from a restricted teen account.

Some sites passed the team's tests admirably, though. The researchers simulated underage users who'd bypassed age verification but were unable to find any harmful content on Minecraft, Snapchat, Spotify, or Fortnite. Fortnite's approach is especially strict, disabling chat and purchases on accounts for kids under 13 until a parent verifies via email; additional verification steps can use a Social Security number or credit card. Kids can still play, but they're muted.

What parents can do

There is no platform that can catch everything, especially when kids are curious. That makes parental involvement the most important layer of protection.

One reason this matters is a related risk worth acknowledging: adults attempting to reach children through social platforms. Even after Instagram took steps to limit contact between adult and child accounts, parents still discovered loopholes. This isn’t a failure of one platform so much as a reminder that no set of controls can replace awareness and involvement.

Mark Beare, GM of Consumer at Malwarebytes says:

“Parents are navigating a fast-moving digital world where offline consequences are quickly felt, be it spoofed accounts, deepfake content or lost funds. Safeguards exist and are encouraged, but children can still be exposed to harmful content.”

This doesn’t mean banning children from the internet. As the EFF points out, many minors use online services productively with the support and supervision of their parents. But it does mean being intentional about how accounts are set up, how children interact with others online, and how comfortable they feel asking for help.

Accounts and settings
  • Use child or teen accounts where available, and avoid defaulting to adult accounts.
  • Keep friends and followers lists set to private.
  • Avoid using real names, birthdays, or other identifying details unless they are strictly required.
  • Avoid facial recognition features for children’s accounts.
  • For teens, be aware of “spam” or secondary accounts they’ve set up that may have looser settings.
Social behavior
  • Talk to your child about who they interact with online and what kinds of conversations are appropriate.
  • Warn them about strangers in comments, group chats, and direct messages.
  • Encourage them to leave spaces that make them uncomfortable, even if they didn’t do anything wrong.
  • Remind them that not everyone online is who they claim to be.
Trust and communication
  • Keep conversations about online activity open and ongoing, not one-off warnings.
  • Make it clear that your child can come to you if something goes wrong without fear of punishment or blame.
  • Involve other trusted adults, such as parents, teachers, or caregivers, so kids aren’t navigating online spaces alone.

This kind of long-term involvement helps children make better decisions over time. It also reduces the risk that mistakes made today can follow them into the future, when personal information, images, or conversations could be reused in ways they never intended.

Research findings, scope and methodology 

This research examined how children under the age of 13 may be exposed to sensitive content when browsing mainstream media and gaming services. 

For this study, a “kid” was defined as an individual under 13, in line with the Children’s Online Privacy Protection Act (COPPA). Research was conducted between December 1 and December 17, 2025, using US-based accounts. 

The research relied exclusively on standard user behavior and passive observation. No exploits, hacks, or manipulative techniques were used to force access to data or content. 

Researchers tested a range of account types depending on what each platform offered, including dedicated child accounts, teen or restricted accounts, adult accounts created through age self-declaration, and, where applicable, public or guest access without registration. 

The study assessed how platforms enforced age requirements, how easy it was to misrepresent age during onboarding, and whether sensitive or illicit content could be discovered through normal browsing, searching, or exploration. 

Across all platforms tested, default algorithmic content and advertisements were initially benign and policy-compliant. Where sensitive content was found, it was accessed through intentional, curiosity-driven behavior rather than passive recommendations. No proactive outreach from other users was observed during the research period. 

The table below summarizes the platforms tested, the account types used, and whether sensitive content was discoverable during testing. 

| Platform | Account type tested | Dedicated kid/teen account | Age gate easy to bypass | Illicit content discovered | Notes |
|---|---|---|---|---|---|
| YouTube (public) | No registration (guest) | Yes (YouTube Kids) | N/A | Yes | Public YouTube allowed access to scam/fraud content and violent footage without sign-in. Age-restricted videos required login, but much content did not. |
| YouTube Kids | Kid account | Yes | N/A | No | Separate app with its own algorithmic wall. No harmful content surfaced. |
| Roblox | All-age account (13+) | No | Not required | Yes | Child accounts could search for and find communities linked to cybercrime and fraud-related keywords. |
| Instagram | Teen account (13–17) | No | Not required | Yes | Restricted accounts still surfaced profiles promoting fraud and cryptocurrency schemes via search. |
| TikTok | Younger user account (13+) | Yes | Not required | No | View-only experience with no free search. No harmful content surfaced. |
| TikTok | Adult account | No | Yes | Yes | Search surfaced credit card fraud–related profiles and tutorials after age gate bypass. |
| Discord | Adult account | No | Yes | Yes | Public servers surfaced explicit adult content when searched directly. No proactive contact observed. |
| Twitch | Adult account | No | Yes | Yes | Discovered escort service promotions and adult content, some behind paywalls. |
| Fortnite | Cabined (restricted) account (13+) | Yes | Hard to bypass | No | Chat and purchases disabled until parent verification. No harmful content found. |
| Snapchat | Adult account | No | Yes | No | No sensitive content surfaced during testing. |
| Spotify | Adult account | Yes | Yes | No | Explicit lyrics labeled. No harmful content found. |
| Messenger Kids | Kid account | Yes | Not required | No | Fully parent-controlled environment. No search or external contacts. |
Screenshots from the research
  • List of Roblox communities with cybercrime-oriented keywords
  • Roblox community that offers chat without verification
  • Roblox community with cybercrime-oriented keywords
  • Graphic content on publicly accessible YouTube
  • Credit card fraud content on publicly accessible YouTube
  • Active escort page on Twitch
  • Stolen credit cards for sale on an Instagram teen account
  • Crypto investment scheme on an Instagram teen account
  • Carding for beginners content on a TikTok adult account, accessed by kids with a fake date of birth.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Categories: Malware Bytes

Show HN: Early detection of LLM hallucinations via structural dissonance

Hacker News - Tue, 02/10/2026 - 8:49am

Hi HN,

I've been exploring a different angle on hallucination detection.

Most approaches react after the fact — fact-checking, RAG, or token probabilities. But hallucinated outputs often show structural warning signs before semantic errors become obvious.

I built ONTOS, a research prototype that monitors structural coherence using IDI (Internal Dissonance Index).

ONTOS acts as an 'External Structural Sensor' for LLMs.

It is model-agnostic and non-invasive, designed to complement existing safety layers and alignment frameworks without needing access to internal weights or costly retraining.

Core idea: Track both local continuity (sentence-to-sentence) and global context drift, then detect acceleration of divergence between them in embedding space.

Analogy: Like noticing a piano performance becoming rhythmically unstable before wrong notes are played. Individual tokens may look fine, but the structural "tempo" is collapsing.
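The core idea can be sketched concretely. The real IDI formula isn't given in the post, so this toy monitor assumes cosine distance, a running-mean centroid as the "global context," and a second difference as the "acceleration" of divergence — all of which are my assumptions, not the actual ONTOS implementation:

```python
import numpy as np

def idi_alerts(embeddings, threshold=0.08):
    """Toy dual-scale monitor over a sequence of sentence embeddings.

    Tracks local continuity (distance to the previous sentence) and
    global drift (distance to a running-mean centroid), then flags
    positions where the gap between the two scales is *accelerating*.
    """
    def cos_dist(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    alerts, divergences = [], []
    centroid = np.asarray(embeddings[0], dtype=float)
    for i in range(1, len(embeddings)):
        v = np.asarray(embeddings[i], dtype=float)
        local = cos_dist(v, embeddings[i - 1])   # sentence-to-sentence continuity
        drift = cos_dist(v, centroid)            # drift from global context
        divergences.append(abs(drift - local))   # gap between the two scales
        if len(divergences) >= 3:
            # second difference ~ acceleration of the divergence signal
            accel = divergences[-1] - 2 * divergences[-2] + divergences[-3]
            if accel > threshold:
                alerts.append(i)
        centroid = (centroid * i + v) / (i + 1)  # update running mean
    return alerts
```

On a coherent sequence (embeddings clustered together) this raises no alerts, while a sequence whose sentences stay locally plausible but accelerate away from the established context trips the detector — the "tempo collapsing before wrong notes" behavior the analogy describes.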

What's in the repo:

• Dual-scale monitoring: local jumps vs. global drift
• Pre-crash detection: IDI triggers on acceleration, not just deviation
• Black-box compatible: no access to model internals needed

Key limitations:

• Detects structural instability, not factual truth
• Sentence-level demos (not token-level yet)
• Research prototype, not production-ready

What I'd love feedback on:

• Does structural monitoring feel more robust than semantic similarity alone?
• What edge cases exist where hallucinations are structurally perfect?
• Are there fundamental blockers to using this as an external safety sensor?

GitHub: https://github.com/yubainu/SL-CRF

Critical feedback welcome — early-stage exploration.

Comments URL: https://news.ycombinator.com/item?id=46959695

Points: 1

# Comments: 1

Categories: Hacker News

America's $1T AI Gamble

Hacker News - Tue, 02/10/2026 - 8:48am
Categories: Hacker News

Show HN: Octrafic – AI agent for API testing from your terminal

Hacker News - Tue, 02/10/2026 - 8:47am

I built a CLI tool that acts as an AI agent for API testing. Think Claude Code, but for testing APIs – you describe what you want to test, and it autonomously generates test cases, runs them, and reports back. Written in Go, open source, no GUI. It fits into your existing terminal workflow. I was tired of manually writing and updating API tests, so I built something that handles that loop for me. GitHub: https://github.com/Octrafic/octrafic-cli

Feedback welcome.

Comments URL: https://news.ycombinator.com/item?id=46959665

Points: 1

# Comments: 0

Categories: Hacker News

Accelerando, but Janky

Hacker News - Tue, 02/10/2026 - 8:47am
Categories: Hacker News
