Feed aggregator
A16Z-backed super PAC is targeting Alex Bores
Any public labeled dataset of (customer question → seasoned sales response)?
I’m looking for a dataset where the input is a customer question/objection (ideally with some product + context), and the label is the response a seasoned salesperson would give to sell that product.
Not customer support, not generic persuasion—specifically sales responses grounded in product facts, with a “move the deal forward” intent.
Bonus if it includes metadata like product category, persona, stage (discovery/close), and outcomes (conversion, next step, etc.).
Does anything like this exist publicly? If not, what’s the closest proxy people use (call transcripts, negotiation corpora, synthetic generation, etc.)?
Comments URL: https://news.ycombinator.com/item?id=46956042
Points: 1
# Comments: 0
Can you rewire your brain?
Article URL: https://aeon.co/essays/what-the-metaphor-of-rewiring-gets-wrong-about-neuroplasticity
Comments URL: https://news.ycombinator.com/item?id=46956021
Points: 1
# Comments: 1
Ask HN: How much did you spend on AI last month?
Show us your receipts! I think it would be enlightening to know what it costs to use these toys - er - tools. Whatever you are building. If you are using AI, tell us what it cost you.
I have spent $0 on AI.
Comments URL: https://news.ycombinator.com/item?id=46956019
Points: 1
# Comments: 3
The world is suffering from a shortage of tenors
Article URL: https://www.economist.com/culture/2026/02/09/the-world-is-suffering-from-a-shortage-of-tenors
Comments URL: https://news.ycombinator.com/item?id=46956016
Points: 1
# Comments: 1
Show HN: Self-Healing AI Agents with Claude Code as Doctor
I built a 4-tier self-healing system for OpenClaw (AI agent platform running on my Mac Mini 24/7). The interesting part is Level 3: when health checks fail repeatedly, the system spawns Claude Code in a tmux PTY session to autonomously diagnose and repair issues.
Recovery escalation:
- Level 0-1: LaunchAgent KeepAlive + Watchdog
- Level 2: Automated "doctor --fix" (config validation, port checks)
- Level 3: Claude Code spawns in tmux, reads logs, attempts repairs
- Level 4: Discord alert if all automation fails
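The escalation ladder above can be sketched as a loop that tries each repair level only if the previous one failed to restore health. A minimal Python sketch with illustrative hook names (not OpenClaw's actual API):

```python
from typing import Callable

# Sketch of a multi-level recovery ladder. All function names here are
# illustrative stand-ins, not OpenClaw's real API.
def escalate(health_check: Callable[[], bool],
             repairs: list[Callable[[], None]],
             alert: Callable[[str], None]) -> int:
    """Try each repair level in order; return the level that restored health.

    repairs[0] might restart the process (LaunchAgent/Watchdog),
    repairs[1] run `doctor --fix`, repairs[2] spawn an agent in tmux.
    """
    if health_check():
        return 0  # already healthy, nothing to do
    for level, repair in enumerate(repairs, start=1):
        repair()
        if health_check():
            return level  # this level fixed it
    alert("all automated repairs failed")  # final level: page a human
    return len(repairs) + 1

# Example: the service recovers at the second repair level.
state = {"healthy": False, "attempts": 0}

def check() -> bool:
    return state["healthy"]

def restart() -> None:      # level 1: plain restart, doesn't help here
    state["attempts"] += 1

def doctor_fix() -> None:   # level 2: automated fix succeeds
    state["attempts"] += 1
    state["healthy"] = True

level = escalate(check, [restart, doctor_fix], alert=print)
# level == 2: recovered at the doctor --fix stage
```

The key design property is that each level is strictly cheaper and more automated than the next, so the expensive step (spawning an AI agent) only runs after the cheap fixes have demonstrably failed.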
Production-tested in my homelab over 3 months: 99% recovery rate, average recovery time down from 45 min to 3 min. It has handled 17 consecutive crashes, config corruption, and port conflicts.
Built for macOS (stable) with Linux systemd support (beta). MIT licensed.
Curious what others think about AI-powered infrastructure self-healing.
Comments URL: https://news.ycombinator.com/item?id=46956003
Points: 3
# Comments: 0
Show HN: Lacune, Go test coverage TUI
I’ve been using Zed for a while and missed inline code coverage visualization. Since that feature doesn’t seem to be coming to Zed anytime soon, I built Lacune, a TUI for tracking uncovered code in real time.
Comments URL: https://news.ycombinator.com/item?id=46956000
Points: 1
# Comments: 0
MCP Knife: A CLI Swiss Army Knife for MCP Servers
Article URL: https://vivekhaldar.com/articles/mcp-knife-cli-swiss-army-knife-for-mcp-servers/
Comments URL: https://news.ycombinator.com/item?id=46955876
Points: 1
# Comments: 0
US plans Big Tech carve-out from next wave of chip tariffs
Article URL: https://www.ft.com/content/e6f7f69a-2552-45f5-ae4c-6f1135e5cde1
Comments URL: https://news.ycombinator.com/item?id=46955869
Points: 1
# Comments: 0
Show HN: MCP Orchestrator – Spawn parallel AI sub-agents from one prompt
I built an open-source MCP server (TypeScript/Node.js) that lets you spawn up to 10 parallel sub-agents using Copilot CLI or Claude Code CLI.
Key features:
- Context passing to each agent (full file, summary, or grep mode)
- Smart timeout selection based on the MCP servers requested
- Cross-platform (macOS, Linux, Windows)
- Headless & programmatic — designed for AI-to-AI orchestration
Example: give one prompt like "research job openings at Stripe, Google, and Meta" — the orchestrator fans it out to 3 parallel agents, each with their own MCP servers (e.g., Playwright for browser), and aggregates results.
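The fan-out-and-aggregate flow described above can be sketched in a few lines; `run_agent` here is a stand-in for shelling out to a CLI agent with its own MCP servers (the actual project is TypeScript/Node.js):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the orchestrator's fan-out pattern. `run_agent` is a
# hypothetical stand-in: in the real system it would spawn a Copilot
# or Claude Code CLI process and return that agent's output.
MAX_AGENTS = 10

def run_agent(subtask: str) -> str:
    return f"result for: {subtask}"

def orchestrate(subtasks: list[str]) -> list[str]:
    """Run each subtask in its own parallel worker and collect results."""
    if not subtasks or len(subtasks) > MAX_AGENTS:
        raise ValueError(f"need 1 to {MAX_AGENTS} subtasks")
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        # map preserves input order, so results line up with subtasks
        return list(pool.map(run_agent, subtasks))

results = orchestrate(
    ["Stripe job openings", "Google job openings", "Meta job openings"]
)
# one result per sub-agent, in the same order as the subtasks
```

Threads are enough here because the workers would spend their time blocked on subprocesses, not computing in Python.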
Install: npm i @ask149/mcp-orchestrator
This is a solo side project. Would love feedback on:
- What CLI backends to support next (Aider, Open Interpreter, local LLM CLIs?)
- Ideas for improving the context-passing system
- What MCP server integrations would be most useful
PRs and issues welcome — check CONTRIBUTING.md in the repo.
Comments URL: https://news.ycombinator.com/item?id=46955848
Points: 2
# Comments: 0
Show HN: Agx – A Kanban board that runs your AI coding agents
agx is a kanban board where each card is a task that AI agents actually execute.
agx new "Add rate limiting to the API"
That creates a card. Drag it to "In Progress" and an agent picks it up. It works through stages — planning, coding, QA, PR — and you watch it move across the board.
The technical problems this solves:
The naive approach to agent persistence is replaying conversation history. It works until it doesn't:
1. Prompt blowup. 50 iterations in, you're stuffing 100k tokens just to resume. Costs explode. Context windows overflow.
2. Tangled concerns. State, execution, and orchestration mixed together. Crash mid-task? Good luck figuring out where you were.
3. Black box execution. No way to inspect what the agent decided or why it's stuck.
agx uses clean separation instead:
- Control plane (PostgreSQL + pg-boss): task state, stage transitions, job queue
- Data plane (CLI + providers): actual execution, isolated per task
- Artifact storage (filesystem): prompts, outputs, decisions as readable files
Agents checkpoint after every iteration. Resuming loads state from the database, not by replaying chat. A 100-iteration task resumes at the same cost as a 5-iteration one.
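The checkpoint-instead-of-replay idea can be sketched like this, with sqlite3 standing in for the PostgreSQL control plane and a hypothetical schema (not agx's actual one):

```python
import json
import sqlite3

# Sketch of checkpoint-based resume: each iteration overwrites one
# compact state row, so resuming reads a single row instead of
# replaying the full conversation history. sqlite3 stands in for
# PostgreSQL; the schema is hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE checkpoints (task_id TEXT PRIMARY KEY, state TEXT)")

def checkpoint(task_id: str, state: dict) -> None:
    db.execute(
        "INSERT INTO checkpoints VALUES (?, ?) "
        "ON CONFLICT(task_id) DO UPDATE SET state = excluded.state",
        (task_id, json.dumps(state)),
    )

def resume(task_id: str) -> dict:
    row = db.execute(
        "SELECT state FROM checkpoints WHERE task_id = ?", (task_id,)
    ).fetchone()
    return json.loads(row[0]) if row else {"iteration": 0, "stage": "planning"}

# 100 iterations of checkpointing...
for i in range(1, 101):
    checkpoint("task-1", {"iteration": i, "stage": "coding"})

# ...and resume still loads one small row, not 100 turns of chat.
state = resume("task-1")
# state == {"iteration": 100, "stage": "coding"}
```

This is what makes resume cost constant: the row size depends on the state schema, not on how many iterations the task has run.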
What you get:
- Constant-cost resume, no context stuffing
- Crash recovery: agent wakes up exactly where it left off
- Full observability: query the DB, read the files, tail the logs
- Provider agnostic: Claude Code, Gemini, Ollama all work
Everything runs locally. PostgreSQL auto-starts via Docker. The dashboard is bundled with the CLI.
Comments URL: https://news.ycombinator.com/item?id=46955833
Points: 2
# Comments: 0
OpenClaw Partners with VirusTotal for Skill Security
Article URL: https://openclaw.ai/blog/virustotal-partnership
Comments URL: https://news.ycombinator.com/item?id=46955832
Points: 2
# Comments: 0
Players discover that World of Warcraft is powered by invisible bunnies
Why Every Business Must Engage with AI – and How to Do It Right
AI is no longer an experimental technology. It’s becoming a baseline capability for modern businesses. The real question most teams should be asking is not “should we use AI?” but “how deeply should we engage with it?”
I’ve talked to many founders, CTOs, and operators over the past couple of years. The hesitation around AI usually comes from two places:
Teams that haven’t really tried AI and feel comfortable sticking with existing workflows.
Teams that rushed into AI, spent money, got disappointing results, and walked away.
Both often conclude: “AI isn’t for us.” That conclusion is understandable — but increasingly risky.
Many organizations still rely on manual or semi-manual processes: document handling, internal knowledge search, reporting, customer support triage. Everything appears to “work,” but it’s slow, hard to scale, and dependent on headcount rather than leverage.
AI isn’t magic, but it is a force multiplier. Ignoring it means accepting structural inefficiency while competitors gradually improve speed, quality, and decision-making.
One misconception I see a lot: that engaging with AI means building custom models or hiring a large ML team. In practice, AI today is closer to what spreadsheets or search once were — general-purpose tools that most teams can benefit from without deep specialization.
Instead of treating AI adoption as a yes/no decision, it’s more useful to think in levels.
Level 1: AI literacy. Every company should be here. This is about enabling people, not systems: using tools like ChatGPT for research, drafting, summarization, and analysis; teaching teams how to verify outputs; and setting clear rules around sensitive data. Low risk, high return.
Level 2: AI-assisted workflows. Here AI becomes part of everyday processes without replacing humans. Examples include internal AI assistants over documentation, AI-supported customer support, content generation, or analytics help. This is where many teams see the best ROI with relatively low complexity.
Level 3: AI-driven systems. At this level, AI is embedded into products or core operations: RAG systems, agent workflows, forecasting, personalization. This requires clean data, evaluation, and operational discipline. Many failures happen here not because AI doesn’t work, but because teams skip the earlier foundations.
The biggest risk isn’t “doing AI wrong.” It’s not building AI fluency at all while the rest of the market moves forward.
Once AI systems are in production, new problems appear: cost control, reliability, hallucinations, latency, silent regressions. At that point, AI stops being a demo and becomes infrastructure.
For teams already dealing with production AI systems, we’ve been thinking a lot about observability and reliability in this space. Some of that work is shared here: https://optyxstack.com/ai
Curious how others on HN think about the “depth” question when it comes to AI adoption.
Comments URL: https://news.ycombinator.com/item?id=46955823
Points: 1
# Comments: 0
Show HN: PicoClaw – lightweight OpenClaw-style AI bot in one Go binary
I’m building PicoClaw: a lightweight OpenClaw-style personal AI bot that runs as a single Go binary. OpenClaw (Moltbot / Clawdbot) is a great product. I wanted something with a simpler, more “single-binary” architecture that’s easy to read and hack on.
Repo: https://github.com/mosaxiv/picoclaw
Comments URL: https://news.ycombinator.com/item?id=46955793
Points: 2
# Comments: 0
Flood Fill vs. The Magic Circle
Article URL: https://www.robinsloan.com/winter-garden/magic-circle/
Comments URL: https://news.ycombinator.com/item?id=46955772
Points: 1
# Comments: 0
Show HN: A CLI tool to automate Git workflows using AI agents
Hi HN,
I built a CLI tool to automate common git workflows using AI agents (e.g. creating branches, summarizing context, and preparing PRs).
Supported platforms:
- GitHub (via gh)
- GitLab (via glab)
Supported AI agents:
- Claude Code
- Gemini CLI
- Cursor Agent
- Codex CLI
Design goals:
- Agent-agnostic (same commands across different AI agents)
- No MCP or custom prompts required
- Minimal setup (from install to first PR in minutes)
Repo: https://github.com/leochiu-a/git-pr-ai
Feedback and questions welcome.
Comments URL: https://news.ycombinator.com/item?id=46955761
Points: 2
# Comments: 0
Use AI to find movies and TV shows on your streaming services
Article URL: https://pickalready.com
Comments URL: https://news.ycombinator.com/item?id=46955757
Points: 2
# Comments: 0
Spec driven development doesn't work if you're too confused to write the spec
GenAI Go SDK for AI
Article URL: https://50984e11.maruel-ca.pages.dev/post/genai-v0.1.0/
Comments URL: https://news.ycombinator.com/item?id=46955737
Points: 1
# Comments: 0
