Feed aggregator
I should have loved biology too
Article URL: https://nehalslearnings.substack.com/p/i-should-have-loved-biology-too
Comments URL: https://news.ycombinator.com/item?id=43764076
Points: 8
# Comments: 0
WD Launches HDD Recycling Process That Reclaims Rare Earth Elements
JEP 513: Flexible Constructor Bodies: Final for JDK 25
Article URL: https://openjdk.org/jeps/513
Comments URL: https://news.ycombinator.com/item?id=43764063
Points: 1
# Comments: 0
Google will keep cookies and skip opt-out option in Chrome
Article URL: https://privacysandbox.com/news/privacy-sandbox-next-steps/
Comments URL: https://news.ycombinator.com/item?id=43764062
Points: 3
# Comments: 0
Show HN: I replaced my devs with AI agents – and it worked
I run a small AI company in Luxembourg. We started out as a consulting studio, building custom tools for clients — mostly boring things like dashboards, reporting modules, and CRUD backends.
At some point I realized we were building the same things over and over again. Not in a copy-paste way, but in a “we could generate 80% of this” kind of way. So last year, I ran a live-fire experiment: I asked Claude 3.5 and DeepSeek to build a small admin panel, with tests and API docs, from a plain-language spec.
The result: not great, but usable. It gave us the idea to stop typing code altogether.
Now, at Easylab AI, we don’t write code manually anymore. We use a stack of LLM-powered agents (Claude, DeepSeek, GPT-4) with structured task roles:
• an orchestrator agent breaks down the spec
• one agent builds back-end logic
• another generates test coverage
• another checks for security risks
• another synthesizes OpenAPI docs
• and humans only intervene for review & deployment
Agents talk via a shared context layer we built, and we introduced our own protocol (we call it MCP — Model Context Protocol) to define context flow and fallback behavior.
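As a rough illustration of that flow, here is a minimal Python sketch of an orchestrator routing a spec through role agents over a shared context layer, with fallback behavior when an agent fails. The SharedContext class, run_pipeline helper, and the toy agents are assumptions for illustration, not Easylab's actual MCP implementation.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SharedContext:
    """Context layer that every agent reads from and appends to (illustrative)."""
    spec: str
    artifacts: dict[str, str] = field(default_factory=dict)

def run_pipeline(ctx: SharedContext,
                 agents: dict[str, Callable[[SharedContext], str]],
                 fallback: Callable[[str, Exception], str]) -> SharedContext:
    # The orchestrator runs role agents in order; each writes its output back
    # into the shared context so downstream agents can build on it.
    for role, agent in agents.items():
        try:
            ctx.artifacts[role] = agent(ctx)
        except Exception as err:  # fallback behavior when an agent fails
            ctx.artifacts[role] = fallback(role, err)
    return ctx

# Toy stand-ins for the LLM calls (Claude, DeepSeek, GPT-4).
agents = {
    "backend":  lambda ctx: f"generated handlers for: {ctx.spec}",
    "tests":    lambda ctx: "pytest suite covering " + ctx.artifacts["backend"],
    "security": lambda ctx: "no injection risks found",
    "docs":     lambda ctx: "OpenAPI document",
}

result = run_pipeline(SharedContext(spec="admin panel with user CRUD"),
                      agents,
                      fallback=lambda role, err: f"[{role} escalated to human: {err}]")
print(result.artifacts)

In practice each of those lambdas would be an LLM call with the shared context serialized into its prompt, and the fallback would hand the task to a human reviewer.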
It’s not perfect. Agents hallucinate. Chaining multiple models can fail in weird ways. Debugging LLM logic isn’t always fun. But…
We’re faster. We ship more. Our team spends more time on logic and less on syntax. And the devs? They’re still here — but they’ve become prompt architects, QA strategists, and AI trainers.
We built Linkeme.ai entirely this way — an AI SaaS for generating social media content for SMEs. It would’ve taken us 3 months before. It took 3 weeks.
Happy to share more details if anyone’s curious. AMA.
Comments URL: https://news.ycombinator.com/item?id=43764039
Points: 1
# Comments: 1
Hackers target Apple users in an 'extremely sophisticated attack'
Article URL: https://www.csoonline.com/article/3964668/hackers-target-apple-users-in-an-extremely-sophisticated-attack.html
Comments URL: https://news.ycombinator.com/item?id=43764032
Points: 2
# Comments: 0
A Call for Constructive Engagement
Article URL: https://www.aacu.org/newsroom/a-call-for-constructive-engagement
Comments URL: https://news.ycombinator.com/item?id=43764022
Points: 1
# Comments: 0
AI hallucinations lead to a new cyber threat: Slopsquatting
Article URL: https://www.csoonline.com/article/3961304/ai-hallucinations-lead-to-new-cyber-threat-slopsquatting.html
Comments URL: https://news.ycombinator.com/item?id=43764010
Points: 3
# Comments: 1
Surprise! Xbox Game Pass Subscribers Can Play Oblivion Remastered Now
La Liga Soccer Livestream: How to Watch Barcelona vs. RCD Mallorca From Anywhere
Streaming on Max: The 27 Absolute Best Movies to Watch
Apple Removes 'Available Now' Claim from Intelligence Page Following NAD Review
Cyber ‘agony aunts’ Amelia Hewitt and Rebecca Taylor are launching a book aimed at empowering women in their cyber security careers
IT security is now a metric in the Microsoft employee appraisal process
Premier League Soccer: Stream Man City vs. Aston Villa Live From Anywhere
After a Week of Playing Overwatch 2 Stadium Early, It Might Be My New Favorite Mode
Best Internet Providers in Hoover, Alabama
Earth Day Challenge: Test Your Recycling IQ
Show HN: CodeAnt AI – an AI code reviewer that understands code and dependencies
Over the last year, we’ve been building CodeAnt AI, working closely with engineering teams struggling with code review quality and speed.
Manual code reviews are slow and repetitive. Reviews today mostly look at what changed — not what the change actually impacts. With more AI-written code, it's getting worse: bigger PRs, faster cycles, less team context.
We wanted to rethink how code reviews are done:
→ Build structured knowledge of the codebase
→ Understand infra and dependency changes
→ Analyze blast radius automatically at PR time
What CodeAnt AI Does (Technical Overview)
Repository Indexing and Graph Building:
When a repo is added, we index the entire codebase and build Abstract Syntax Trees (ASTs).
We map upstream and downstream dependencies across files, functions, types, and modules.
We run custom lightweight language servers for multiple languages to support:
go_to_definition to find symbol declarations
find_all_references to locate usage points
fetch_signatures and fetch_types for richer semantic context
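As a toy illustration of the indexing step, the sketch below builds a tiny call graph from an AST and answers a find_all_references-style query. It uses Python's ast module on a single file; the product covers multiple languages through its custom language servers, and the helper names here (index_file, find_all_references) are placeholders, not the actual implementation.

import ast
from collections import defaultdict

def index_file(source: str):
    """Return {function_name: set of names it calls} for one module."""
    tree = ast.parse(source)
    calls = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    calls[node.name].add(inner.func.id)
    return calls

def find_all_references(graph, symbol: str):
    """Upstream lookup: which functions call `symbol`?"""
    return [caller for caller, callees in graph.items() if symbol in callees]

source = """
def save_user(user): validate(user)
def validate(user): pass
def handler(req): save_user(req)
"""
graph = index_file(source)
print(find_all_references(graph, "save_user"))   # ['handler']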
Pull Request Analysis:
When a PR is created:
We detect the diff.
We pull relevant upstream/downstream context for any changed symbols.
We gather connected function definitions, usage sites, interfaces, and infra files touched.
The LLM invokes the language servers (almost like a developer navigating manually) to reason over this structured context, not just the raw diff.
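A rough sketch of that PR-time context gathering, assuming the index exposes a caller/callee graph: given the symbols a diff touches, walk upstream to collect the callers whose behavior the change could affect. The function name and graph shape are illustrative, not CodeAnt's internals.

def blast_radius(call_graph: dict[str, set[str]], changed: set[str]) -> set[str]:
    """Transitively collect callers of any changed symbol (upstream impact)."""
    impacted, frontier = set(changed), set(changed)
    while frontier:
        frontier = {caller for caller, callees in call_graph.items()
                    if callees & frontier} - impacted
        impacted |= frontier
    return impacted

# e.g. a PR edits `validate`; its direct and indirect callers get pulled
# into the review context alongside the raw diff.
graph = {"handler": {"save_user"}, "save_user": {"validate"}, "validate": set()}
print(blast_radius(graph, {"validate"}))   # {'validate', 'save_user', 'handler'}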
Code Quality Analysis:
Along with AI reasoning, we layer traditional static checks inside PRs:
Detecting duplicate code patterns
Finding dead, unused code blocks
Flagging overly complex functions
Goal: Make linting + AI suggestions seamless, without needing separate tools.
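For instance, the "overly complex functions" check could be approximated with a crude cyclomatic-complexity estimate over the AST, as in the Python sketch below; the threshold and the set of branching node types are placeholder choices, not the actual linter rules.

import ast

# Node types counted as branch points in this rough estimate.
BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.ExceptHandler)

def too_complex(source: str, threshold: int = 10) -> list[str]:
    """Flag functions whose rough branch count exceeds the threshold."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            score = 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(node))
            if score > threshold:
                flagged.append(f"{node.name}: complexity ~{score}")
    return flagged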
Security and Infrastructure Context:
We maintain an internal curated database of application security issues, mapped to OWASP and CWE.
We run Infrastructure-as-Code (IaC) security checks across:
Terraform, Kubernetes, Docker, CloudFormation, Ansible
You can optionally connect cloud accounts (AWS, GCP, Azure):
We scan your live cloud infra for misconfigurations
We pull cloud resource context into PRs (e.g., when a Terraform PR changes a live VPC rule, we show the potential blast radius).
We monitor End-of-Life (EOL) libraries and third-party package vulnerabilities by scanning the National Vulnerability Database (NVD) every 20 minutes and flagging at PR time.
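A minimal sketch of that polling loop, assuming the public NVD 2.0 REST API with its lastModStartDate/lastModEndDate filters; the request parameters, the response shape, and the naive substring match against a dependency list are all assumptions for illustration, not the production scanner.

import time
from datetime import datetime, timedelta, timezone
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
POLL_INTERVAL = 20 * 60  # seconds, matching the 20-minute cadence above

def poll_nvd(dependencies: set[str]) -> None:
    last_checked = datetime.now(timezone.utc) - timedelta(seconds=POLL_INTERVAL)
    while True:
        now = datetime.now(timezone.utc)
        resp = requests.get(NVD_URL, params={
            "lastModStartDate": last_checked.isoformat(),
            "lastModEndDate": now.isoformat(),
        }, timeout=30)
        # Response shape assumed: {"vulnerabilities": [{"cve": {"id": ..., "descriptions": [...]}}]}
        for item in resp.json().get("vulnerabilities", []):
            desc = item["cve"]["descriptions"][0]["value"].lower()
            hits = [dep for dep in dependencies if dep in desc]
            if hits:
                print(f"flag at PR time: {item['cve']['id']} mentions {hits}")
        last_checked = now
        time.sleep(POLL_INTERVAL)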
In short: we try to automate how an experienced developer would actually review a change:
→ Understand the code structure
→ Understand where it’s used
→ Understand how infra/cloud gets affected
→ Catch quality, security, and complexity issues before merge, without needing extra dashboards or tools.
Teams using CodeAnt AI have reported 50%+ faster code reviews while finding deeper and more actionable problems earlier.
Would love feedback from the HN community; both technical and critical feedback are welcome.
Thanks for checking it out!
Comments URL: https://news.ycombinator.com/item?id=43763633
Points: 1
# Comments: 0