Hacker News


Show HN: CloudVac – open-source AWS resource cleaner with cost insights

Tue, 02/10/2026 - 9:51am

CloudVac is a local-only tool to scan, inspect, estimate costs for, and clean up unused AWS resources across multiple profiles and regions.

It reads your `~/.aws/credentials` and discovers resources across services like EC2, RDS, Lambda, S3, CloudFormation stacks, and more. It estimates monthly cost per resource, flags orphaned log groups and under-utilized items, and offers a dependency-aware deletion plan with dry-run safety by default. Everything runs locally with no telemetry and no external calls.
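The "dependency-aware deletion plan with dry-run safety by default" can be sketched as a topological sort over a resource dependency graph: resources must be deleted in the reverse of their creation order. The resource names and graph below are hypothetical illustrations, not CloudVac's actual code.

```python
from graphlib import TopologicalSorter

# Hypothetical resource graph: each resource maps to the resources it depends on.
resources = {
    "vpc-1": set(),
    "subnet-1": {"vpc-1"},
    "ec2-1": {"subnet-1"},
    "eip-1": {"ec2-1"},
}

def deletion_plan(deps):
    # static_order() yields dependencies before dependents (creation order);
    # a safe deletion plan is the reverse of that.
    return list(reversed(list(TopologicalSorter(deps).static_order())))

def clean_up(deps, dry_run=True):
    plan = deletion_plan(deps)
    for res in plan:
        if dry_run:
            print(f"[dry-run] would delete {res}")
        else:
            print(f"deleting {res}")  # a real tool would call the AWS API here
    return plan

clean_up(resources)  # dry-run by default; nothing is actually deleted
```

Here the plan deletes `eip-1` first and `vpc-1` last, so nothing is ever removed while something else still depends on it.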

I built this because managing unused AWS resources over many profiles/regions gets expensive and error-prone. Feedback, suggestions, and issues are welcome.

https://github.com/realadeel/CloudVac

Comments URL: https://news.ycombinator.com/item?id=46960430

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Good Egg: Trust Scoring for GitHub PR Authors

Tue, 02/10/2026 - 9:50am

I'm Jeff Smith. I've been contributing to AI in open source for a long time, across the Spark, Elixir, and PyTorch ecosystems. I've seen firsthand how open source can be a great place for people to collaborate and build AI together. I even wrote a book about it: https://www.manning.com/books/machine-learning-systems with all open source code: https://github.com/jeffreyksmithjr/reactive-machine-learning...

But the challenges are real. AI-generated code slop and low-quality submissions are flooding projects. Contribution volume is up; signal-to-noise is down. Maintainers can no longer assume a PR represents genuine investment.

Good Egg is a tool I built to help. It mines a contributor's merged PR history across the GitHub ecosystem and computes a trust score relative to your project. The core idea: good contributors are already exhibiting good behavior -- merged PRs in established repos, sustained contributions over time, work across multiple projects. That track record is a strong signal, and it already exists in the GitHub API.

How it works:

- Builds a bipartite contribution graph (users ↔ repositories) from merged PRs
- Applies personalized graph scoring biased toward your project and language ecosystem
- Accounts for recency decay, repository quality (stars, language normalization), and anti-gaming measures (self-contribution penalties, per-repo caps)
- Classifies contributors as HIGH / MEDIUM / LOW / UNKNOWN / BOT
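A toy version of this kind of scoring (recency decay, star-based repository quality, per-repo caps, threshold classification) might look like the following. All names, weights, and thresholds here are illustrative guesses, not Good Egg's actual parameters; see the methodology doc for the real model.

```python
import math
from datetime import datetime, timezone

HALF_LIFE_DAYS = 365.0   # hypothetical recency half-life
PER_REPO_CAP = 5.0       # anti-gaming: cap contribution weight per repo

def recency_weight(merged_at, now):
    # Exponential decay: a PR merged one half-life ago counts half as much.
    age_days = (now - merged_at).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def trust_score(prs, now=None):
    """prs: list of (repo, stars, merged_at) records for one contributor."""
    now = now or datetime.now(timezone.utc)
    per_repo = {}
    for repo, stars, merged_at in prs:
        quality = math.log1p(stars)  # diminishing returns on star counts
        w = quality * recency_weight(merged_at, now)
        per_repo[repo] = min(per_repo.get(repo, 0.0) + w, PER_REPO_CAP)
    return sum(per_repo.values())

def classify(score, high=8.0, medium=3.0):
    if score >= high:
        return "HIGH"
    if score >= medium:
        return "MEDIUM"
    return "LOW"
```

A contributor with sustained merged PRs across several established repos accumulates weight from many capped per-repo buckets, which is exactly the "track record across multiple projects" signal described above.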

The methodology doc goes into the full detail: https://github.com/2ndSetAI/good-egg/blob/main/docs/methodol...

Runs four ways:

- GitHub Action: drop it into any PR workflow and get a comment with the score
- CLI: good-egg score --repo
- Python library: await score_pr_author(login, repo_owner, repo_name, token)
- MCP server: plug it into Claude or other AI assistants

On Vouch and the circle-of-trust approach:

Mitchell Hashimoto's Vouch takes a different angle: maintainers manually vouch for contributors they trust, building a web-of-trust. I think that's a valid approach and have seen circles of trust work well (on PyTorch specifically, where contributors came from all over, including major corporate partners). But I've also seen gaps that could easily be filled by a bit of data that already exists. Vouch requires active maintainer participation in a separate system and has a cold-start problem. Good Egg is complementary. It's automated, doesn't ask maintainers to do extra work, and works from day one on any repo.

What it doesn't do:

Good Egg doesn't send data to any remote service. It reads from the GitHub API, computes locally, and that's it. I'm not building a training set or a contributor database. This is just a tool for the community.

Configuration and extensibility:

Scoring parameters (thresholds, graph weights, recency decay, language multipliers) are all configurable via YAML or environment variables. More extensibility is planned, particularly around additional data sources (e.g., GitLab) and methodology variations like graph-based project relatedness and incorporating review/issue activity alongside PRs.

Code: https://github.com/2ndSetAI/good-egg
PyPI: pip install good-egg
Docs: https://github.com/2ndSetAI/good-egg/tree/main/docs

Comments URL: https://news.ycombinator.com/item?id=46960412

Points: 2

# Comments: 0

Categories: Hacker News

Ask HN: How to find joy in writing/learning about tech in this AI world?

Tue, 02/10/2026 - 9:50am

Looking to hear from fellow HN'ers who have found a way out of this slump.

I've written code practically every day for 40 years, some of it for my livelihood, but mostly because it gave me immense joy. I don't have many public codebases to show for it; I wrote code the way an artist doodles in their spare time.

But lately, I am feeling lost. The impulse to learn new things and write code has completely vanished under the new AI/LLM regime. Things that I strove to learn and build slowly can now be accomplished with ease. It is quite possible that my aims were modest and my skills were ripe for automation.

I'd like to get out of this lull, but I simply can't find the motivation to dig into agentic AI and churn out stuff, like an old-school woodworker told to learn CAD and let the machine handle the nitty gritty.

Of course, I can continue to do what I used to do earlier, since I am neither interested in money nor fame. But one thing that I _think_ I had at the back of my mind in my earlier life was to internalize tiny 'katas' (patterns) and form insights that I imagined I could teach to someone. I find that I can no longer imagine that "someone", since everyone I meet is more interested in AI delivering the end product rather than going through the process and paying their dues.

Apologies for the rambling, and grateful in advance for suggestions.

Comments URL: https://news.ycombinator.com/item?id=46960408

Points: 2

# Comments: 0

Categories: Hacker News

AI Isn't Dangerous. Evaluation Structures Are.

Tue, 02/10/2026 - 9:46am

I wrote a long analysis about why AI behavior may depend less on model ethics and more on the environment it is placed in — especially evaluation structures (likes, rankings, immediate feedback) versus relationship structures (long-term interaction, correction loops).

The article uses the Moltbook case as a structural example and discusses environment alignment, privilege separation, and system design implications for AI safety.

Full article: https://medium.com/@clover.s/ai-isnt-dangerous-putting-ai-inside-an-evaluation-structure-is-644ccd4fb2f3

Comments URL: https://news.ycombinator.com/item?id=46960352

Points: 2

# Comments: 1

Categories: Hacker News

Rented Virtue

Tue, 02/10/2026 - 9:44am
Categories: Hacker News

Constraint Propagation for Fun

Tue, 02/10/2026 - 9:43am
Categories: Hacker News

Show HN: Minespheres – a Minesweeper-like game with a twist

Tue, 02/10/2026 - 9:41am

I like meditative puzzle games like Minesweeper, and decided to make my interpretation of that game, but with a spherical board where each board tile is determined by a Voronoi algorithm.

To make the game more challenging, besides the number of tiles and the density of mines, I added several ways to place seed points on the sphere: the default Fibonacci method gives the most regular tile shapes, while Random and Chaotic give progressively more irregular tiles.
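The "Fibonacci method" is presumably the standard Fibonacci-lattice construction, which spaces points near-uniformly on a sphere by stepping the longitude by the golden angle. A minimal sketch of that construction (assuming the game uses the standard variant):

```python
import math

def fibonacci_sphere(n):
    """Place n seed points near-uniformly on the unit sphere."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ≈ 2.39996 rad
    points = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n   # latitude: evenly spaced in (-1, 1)
        r = math.sqrt(1.0 - y * y)      # radius of the latitude circle
        theta = golden_angle * i        # longitude advances by the golden angle
        points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points
```

Running a Voronoi partition over these seeds yields the regular, roughly hexagonal tiles; jittering or fully randomizing the seeds instead would produce the irregular Random/Chaotic boards.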

Hope you enjoy this small game as much as I do. Works on iPhone as a PWA, too (couldn't test on Android, though — let me know if there are any issues).

Comments URL: https://news.ycombinator.com/item?id=46960286

Points: 1

# Comments: 0

Categories: Hacker News
