Feed aggregator

Prepper Disk

Hacker News - Thu, 04/24/2025 - 12:16pm

Article URL: https://www.prepperdisk.com/

Comments URL: https://news.ycombinator.com/item?id=43784475

Points: 1

# Comments: 1

Categories: Hacker News

Ask HN: How do you retain both technical and domain knowledge long-term?

Hacker News - Thu, 04/24/2025 - 12:14pm

I'm exploring a learning system that addresses the dual challenge many of us face: remembering both technical concepts AND the business domain knowledge needed to apply them effectively. After years of coding in different industries, I've noticed that understanding the domain (finance, healthcare, e-commerce, etc.) is often as challenging as mastering the technical stack, yet most learning tools focus solely on the technical side. Some questions I'm curious about:

How do you currently capture and retain domain-specific knowledge alongside technical concepts? What's your biggest challenge when onboarding to a new codebase with an unfamiliar business domain? Have you tried using flash cards or spaced repetition for either technical or domain knowledge? What worked or didn't? Would you find value in a tool that could help teams build shared mental models of both their tech stack and business domain? How do you currently transfer domain knowledge between team members?

I'm in early validation stages and would appreciate your insights before building anything. If there's enough interest, I'll share what I learn from this thread.

Comments URL: https://news.ycombinator.com/item?id=43784449

Points: 2

# Comments: 0

Categories: Hacker News

Not for private gain – An open letter

Hacker News - Thu, 04/24/2025 - 12:12pm

Article URL: https://notforprivategain.org/

Comments URL: https://news.ycombinator.com/item?id=43784434

Points: 1

# Comments: 0

Categories: Hacker News

The EdTech Chicken and Egg Problem

Hacker News - Thu, 04/24/2025 - 12:11pm

I've worked in edtech for almost 10 years now in B2B, B2C, and nonprofit contexts. I've seen real product market fit, and a lot of poor product market fit.

Edtech has been one of the largest tech disappointments of the internet era. The internet has transformed everything about how people learn. I always joke that Youtube is actually the best edtech product. And now, I guess chatGPT and other LLMs. But these products have a lot of problems, specifically around accuracy, pedagogy and lack of assessment. (Research shows low-stakes assessment is when the moment of learning often happens.)

Within the "Ed tech space", a lot of products have failed in my view. The best product I built was free online science simulations (virtual labs).

I've worked on products that were financially successful but its debatable if they helped helped users learn much.

Edtech companies that sell to parents are making a product for parents. The goal is often to make parents feel good about the choices they are making for their kids. For example, give your kids an ipad with Educational games, and now you're a better parent.

Edtech products that sell to business are making a product for employers. Much of these products end up being about tracking employees rather than real skill development.

The reason making a product for educators ends up being more effective in terms of learning outcomes is because most teachers have their incentives aligned - they want their students to learn more and be able to apply that learning.

Which leads me to this chicken and egg problem - because education is a system, technology either has to fit into that system or break the system. Breaking the system can be costly and have lots of undesirable side effects. I imagine this is a lot like healthcare / healthtech - you can't just move fast and break things.

Adoption of products in EdTech (via educators) is more involved than pure B2C but less profitable than B2B, making it costly and painful.

From both a product/context and business model perspective, it's hard. This is partly why I think the non profit model has worked the best in education (Khan academy, Phet, etc). Without having to optimize for profit, you have the freedom to build products that fit better into the existing system. You can serve people who can't afford to pay you, nor do they have the power to convince their administrations to pay you.

However - I still think we haven't done enough - what is the next step?

I think if someone asked me where the next 2B in edtech funding should go, I would suggest highly specialized nonprofits each with a focused goal like teaching meaningful reading skill at the late elementary level or getting kids excited about math at the middle school level. Focus these nonprofits to have educator obsession - the educators trying to solve these problems in the real world.

Ultimately, for real outcomes, all these products need to be free or sponsored. I do think paid products selling to school districts work (these businesses do exist) but this adds a lot of friction that slows product development down, and of course, mucks up the incentives. These paid products often want strong moats - so they lock districts into multi-year contracts and then stop improving the product. They generate metrics administrators like, with products educators are forced to use but aren't improving. Nonprofits have a magical freedom to be "moat-less."

Comments URL: https://news.ycombinator.com/item?id=43784414

Points: 1

# Comments: 0

Categories: Hacker News

PopeGPT

Hacker News - Thu, 04/24/2025 - 12:07pm

Article URL: https://popegpt.com

Comments URL: https://news.ycombinator.com/item?id=43784379

Points: 1

# Comments: 0

Categories: Hacker News

Best Bluetooth Speaker for 2025

CNET Feed - Thu, 04/24/2025 - 12:02pm
Looking to find the best Bluetooth speaker for your money? CNET's experts have found the top options for every budget based on sound quality, size, durability and battery life.
Categories: CNET

Jericho Security Gets $15 Million for AI-Powered Awareness Training

Security Week - Thu, 04/24/2025 - 12:01pm

Jericho Security has raised $15 million in Series A funding for its AI-powered employee cybersecurity training platform.

The post Jericho Security Gets $15 Million for AI-Powered Awareness Training appeared first on SecurityWeek.

Categories: SecurityWeek

The Ultimate Star Wars Guide: How to Watch Movies, TV Shows and Canon Stories in Order

CNET Feed - Thu, 04/24/2025 - 12:00pm
From A New Hope to Andor, here are our tips for binge-watching the Jedi- and Sith-filled universe.
Categories: CNET

New whitepaper outlines the taxonomy of failure modes in AI agents

Microsoft Malware Protection Center - Thu, 04/24/2025 - 12:00pm

We are releasing a taxonomy of failure modes in AI agents to help security professionals and machine learning engineers think through how AI systems can fail and design them with safety and security in mind.

The taxonomy continues Microsoft AI Red Team’s work to lead the creation of systematization of failure modes in AI; in 2019, we published one of the earliest industry efforts enumerating the failure modes of traditional AI systems. In 2020, we partnered with MITRE and 11 other organizations to codify the security failures in AI systems as Adversarial ML Threat Matrix, which has now evolved into MITRE ATLAS™. This effort is another step in helping the industry think through what the safety and security failures in the fast-moving and highly impactful agentic AI space are.

Taxonomy of Failure Mode in Agentic AI Systems

Microsoft's new whitepaper explains the taxonomy of failure modes in AI agents, aimed at enhancing safety and security in AI systems.

Read the whitepaper

To build out this taxonomy and ensure that it was grounded in concrete and realistic failures and risk, the Microsoft AI Red Team took a three-prong approach:

  • We catalogued the failures in agentic systems based on Microsoft’s internal red teaming of our own agent-based AI systems.
  • Next, we worked with stakeholders across the company—Microsoft Research, Microsoft AI, Azure Research, Microsoft Security Response Center, Office of Responsible AI, Office of the Chief Technology Officer, other Security Research teams, and several organizations within Microsoft that are building agents to vet and refine this taxonomy.
  • To make this useful to those outside of Microsoft, we conducted systematic interviews with external practitioners working on developing agentic AI systems and frameworks to polish the taxonomy further.

To help frame this taxonomy in a real-world application for readers, we also provide a case study of the taxonomy in action. We take a common agentic AI feature of memory and we walk through how an cyberattacker could corrupt an agent’s memory and use that as a pivot point to exfiltrate data.

Figure 1. Failure modes in agentic AI systems.

Core concepts in the taxonomy

While identifying and categorizing the different failure modes, we broke them down across two pillars, safety and security.

  • Security failures are those that result in core security impacts, namely a loss of confidentiality, availability, or integrity of the agentic AI system; for example, such a failure allowing a threat actor to alter the intent of the system.
  • Safety failure modes are those that affect the responsible implementation of AI, often resulting in harm to the users or society at large; for example, a failure that causes the system to provide differing quality of service to different users without explicit instructions to do so.

We then mapped the failures along two axes—novel and existing.

  1. Novel failure modes are unique to agentic AI and have not been observed in non-agentic generative AI systems, such as failures that occur in the communication flow between agents within a multiagent system.
  2. Existing failure modes have been observed in other AI systems, such as bias or hallucinations, but gain in importance in agentic AI systems due to their impact or likelihood.

As well as identifying the failure modes, we have also identified the effects these failures could have on the systems they appear in and the users of them. Additionally we identified key practices and controls that those building agentic AI systems should consider to mitigate the risks posed by these failure modes, including architectural approaches, technical controls, and user design approaches that build upon Microsoft’s experience in securing software as well as generative AI systems.

The taxonomy provides multiple insights for engineers and security professionals. For instance, we found that memory poisoning is particularly insidious in AI agents, with the absence of robust semantic analysis and contextual validation mechanisms allows malicious instructions to be stored, recalled, and executed. The taxonomy provides multiple strategies to combat this, such as limiting the agent’s ability to autonomously store memories by requiring external authentication or validation for all memory updates, limiting which components of the system have access to the memory, and controlling the structure and format of items stored in memory.

Read the new “Taxonomy of Failure Mode in Agentic AI Systems” whitepaper How to use this taxonomy
  1. For engineers building agentic systems:
    • We recommend that this taxonomy is used as part of designing the agent, augmenting the existing Security Development Lifecycle and threat modeling practice. The guide helps walk through the different harms and the potential impact.
    • For each harm category, we provide suggested mitigation strategies that are technology agnostic to kickstart the process.
  2. For security and safety professionals:
    • This is a guide on how to probe AI systems for failures before the system launches. It can be used to generate concrete attack kill chains to emulate real world cyberattackers.
    • This taxonomy can also be used to help inform defensive strategies for your agentic AI systems, including providing inspiration for detection and response opportunities.
  3. For enterprise governance and risk professionals, this guide can help provide an overview of not just the novel ways these systems can fail but also how these systems inherit the traditional and existing failure modes of AI systems.
Learn more

Like all taxonomies, we consider this a first iteration and hope to continually update it, as we see the agent technology and cyberthreat landscape change. If you would like to contribute, please reach out to airt-agentsafety@microsoft.com.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The taxonomy was led by Pete Bryan; the case study on poisoning memory was led by Giorgio Severi. Others that contributed to this work: Joris de Gruyter, Daniel Jones, Blake Bullwinkel, Amanda Minnich, Shiven Chawla, Gary Lopez, Martin Pouliot,  Whitney Maxwell, Katherine Pratt, Saphir Qi, Nina Chikanov, Roman Lutz, Raja Sekhar Rao Dheekonda, Bolor-Erdene Jagdagdorj, Eugenia Kim, Justin Song, Keegan Hines, Daniel Jones, Richard Lundeen, Sam Vaughan, Victoria Westerhoff, Yonatan Zunger, Chang Kawaguchi, Mark Russinovich, Ram Shankar Siva Kumar.

The post New whitepaper outlines the taxonomy of failure modes in AI agents appeared first on Microsoft Security Blog.

Categories: Microsoft

Pages