Memory keeps the AI grounded in your working style and stored knowledge. Working memory preserves your preferences, verification patterns, and project history. To augment this, Memory uses Retrieval as a strategic tool that pulls information from your stored Files (artifacts, docs, and findings) with embedding-based recall and citations. Neither is a set of rigid rules: working memory applies weighted heuristics and Retrieval supplies referenced materials, so the AI behaves like an extension of your team.

Working memory

  • What it preserves: preferred confirmation steps for common issue classes (e.g., SSRF, DOM XSS, IDOR), when to escalate or retry, concise vs. verbose output style, minimum proof standards, workflow biases (which tools to try first and with which parameters), project status and progress across tasks, and correlations between data, files, and past decisions
  • Runtime use:
    1. Planning: injects weighted preferences into the task plan (tool selection, ordering, fallbacks); see the sketch after this list
    2. Execution: tunes prompts for sub‑steps (for example, verify with an independent payload before reporting)
    3. Evidence: raises the bar on proof where you have set stricter standards
    4. Reporting: formats outputs consistent with your style guidelines
    5. Context recall: surfaces project history, prior task outcomes, and links related files and findings to current work
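The sketch below illustrates, in plain Python, how a working-memory preference might be modeled as a weighted heuristic that biases planning rather than acting as a hard rule. All names, fields, and weights are illustrative assumptions, not the product's internal representation.

```python
from dataclasses import dataclass

# Hypothetical types and values: they illustrate weighted heuristics,
# not the product's internal schema.

@dataclass
class Preference:
    topic: str      # e.g. "recon.tool_order" or "evidence.idor" (illustrative)
    guidance: str   # natural-language instruction injected into prompts
    weight: float   # how strongly planning should favor this heuristic (0..1)

preferences = [
    Preference("recon.tool_order", "Prefer httpx with -tls-grab; fall back to curl only if needed.", 0.9),
    Preference("evidence.idor", "Include one negative control and capture both request/response pairs.", 0.8),
    Preference("reporting.style", "Keep intermediate output concise; full detail only in the final report.", 0.4),
]

def plan_hints(prefs: list[Preference], min_weight: float = 0.5) -> list[str]:
    """Return the guidance strings strong enough to shape the task plan, strongest first."""
    ranked = sorted(prefs, key=lambda p: p.weight, reverse=True)
    return [p.guidance for p in ranked if p.weight >= min_weight]

print(plan_hints(preferences))
```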
Examples
  • Format all vulnerability reports with severity, CVSS score, affected endpoints, reproduction steps, and remediation guidance in structured Markdown tables (see the table sketch after these examples).
  • Treat a 4xx during auth probing as a soft fail; retry with rotated headers before escalating.
  • When verifying IDOR, always include one negative control and capture both request/response pairs.
  • Prefer httpx with -tls-grab during recon; fall back to curl only if needed.
  • Recall that the checkout feature was tested last quarter; surface prior findings and link the original PoC from Files when revisiting.
  • Track that API endpoint enumeration is in progress across multiple tasks; correlate new discoveries with earlier coverage maps.
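As a concrete illustration of the first example, the snippet below renders a finding into the kind of structured Markdown table that preference describes. The column names and finding fields are assumptions for illustration, not a required schema.

```python
# Hypothetical rendering of the report-format preference above.

def finding_to_markdown(finding: dict) -> str:
    headers = ["Severity", "CVSS", "Affected endpoint", "Reproduction steps", "Remediation"]
    keys = ["severity", "cvss", "endpoint", "repro", "remediation"]
    rows = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
        "| " + " | ".join(str(finding.get(k, "")) for k in keys) + " |",
    ]
    return "\n".join(rows)

print(finding_to_markdown({
    "severity": "High",
    "cvss": "8.1",
    "endpoint": "/api/v1/orders/{id}",
    "repro": "Replay user A's order id while authenticated as user B",
    "remediation": "Enforce object-level authorization on order lookups",
}))
```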

Retrieval

Retrieval is the strategic tool that Neo uses to pull information from your stored Files and chat history. When Working memory needs context, Retrieval surfaces the right artifacts, docs, findings, and previous conversations with citations.
  • What you can store: past findings and confirmation steps, bypass techniques and exploit variants, logs and traces, PoCs, write‑ups, architecture docs and API references, ad‑hoc notes and exports (CSV, JSON, Markdown, and more), plus entire chat history from previous sessions
  • Ways to add files: manual upload, auto‑capture from AI runs, or programmatic integrations from your pipelines
  • How it works:
    1. Storage: all files, artifacts, and data are stored securely in your workspace
    2. Embedding: content is chunked and converted to vector embeddings with metadata (source, filename, timestamps) for traceability
    3. Runtime retrieval: when the Agent Swarm encounters a specific app, endpoint, or situation, Retrieval automatically recalls related prior data by searching embeddings; hybrid retrieval (semantic plus keyword) pulls relevant excerpts with citations during planning and answering (a minimal sketch follows this list)
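The following minimal sketch walks through the chunk, embed, and hybrid-retrieve flow described above. The toy embedding, the 0.7/0.3 score weighting, and the metadata fields are assumptions for illustration; a real pipeline would use a learned embedding model and a proper keyword index.

```python
import math

# Toy end-to-end sketch: chunks are embedded with metadata, then a hybrid
# score (semantic + keyword) picks excerpts to return with citations.

def embed(text: str) -> list[float]:
    # Placeholder embedding: a normalized character histogram.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def keyword_score(query: str, text: str) -> float:
    terms = set(query.lower().split())
    return sum(1.0 for t in terms if t in text.lower()) / (len(terms) or 1)

# Chunks carry metadata (source, filename, timestamp) so results can be cited.
chunks = [
    {"text": "IDOR on /api/v1/orders confirmed with negative control",
     "source": "files", "filename": "findings/idor-orders.md", "timestamp": "2024-11-02"},
    {"text": "OpenAPI spec for the orders service with v3 endpoints enumerated",
     "source": "files", "filename": "docs/orders-openapi.yaml", "timestamp": "2024-10-15"},
]
for c in chunks:
    c["embedding"] = embed(c["text"])

def hybrid_retrieve(query: str, top_k: int = 1) -> list[tuple[float, str, str]]:
    q_vec = embed(query)
    scored = []
    for c in chunks:
        score = 0.7 * cosine(q_vec, c["embedding"]) + 0.3 * keyword_score(query, c["text"])
        citation = f'{c["filename"]} ({c["timestamp"]})'
        scored.append((score, c["text"], citation))
    return sorted(scored, reverse=True)[:top_k]

print(hybrid_retrieve("prior IDOR findings for the orders endpoint"))
```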
Examples
  • When testing a previously scanned endpoint, Retrieval recalls prior HARs, response patterns, and PoCs with citations.
  • During API enumeration, related OpenAPI specs and past payload variants are retrieved automatically.
  • Reviewing a feature surfaces linked architecture docs, prior findings, and confirmation steps from earlier runs.
  • Auto-captured screenshots and logs are retrieved when the Agent Swarm revisits a UI flow or error condition.
  • Previous chat history is recalled when resuming a project, surfacing earlier decisions, questions asked, and context from past sessions (see the sketch below).
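As a rough illustration of how recalled material could be surfaced with citations when a project resumes, the snippet below formats retrieved excerpts into a context block. The record shape and citation format are assumptions, not the product's actual output.

```python
# Hypothetical retrieved records; filenames, dates, and excerpts are invented.
retrieved = [
    {"excerpt": "Checkout race condition reported last quarter; PoC attached.",
     "filename": "findings/checkout-race.md", "timestamp": "2024-07-19"},
    {"excerpt": "Decision from prior session: defer cart API fuzzing until staging access.",
     "filename": "chat/2024-07-22-session.md", "timestamp": "2024-07-22"},
]

def format_context(query: str, results: list[dict]) -> str:
    """Stitch retrieved excerpts into a cited context block for the agents."""
    lines = [f"Prior context for: {query}"]
    for r in results:
        lines.append(f'- {r["excerpt"]} [source: {r["filename"]}, {r["timestamp"]}]')
    return "\n".join(lines)

print(format_context("resume checkout testing", retrieved))
```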