Neo is a security-first AI agent. It can handle broad, exploratory goals or precise, targeted tasks, but the quality of your instructions directly impacts the quality of your results. This guide shows you what works and what does not when prompting Neo for security work.
Before diving in, check out Quickstart to understand how Neo plans and executes workflows.

Core Principles

1. Match Your Strategy to Your Goal

Neo excels at both open-ended exploration and precise execution. Choose your prompting style based on what you need:
  • Exploratory (broad prompts): Use when you want Neo to think like an attacker and discover unknowns.
  • Targeted (specific prompts): Use when you have defined objectives, known assets, or compliance requirements.
Start broad when mapping unknown attack surfaces. Get specific when validating known issues or running repeatable tests.

2. Specify Your Organization’s Context

Every organization has different risk tolerances, severity definitions, and escalation procedures. Neo cannot infer these, so be explicit about:
  • Severity prioritization: What makes something critical vs. high vs. medium in your environment?
  • Escalation triggers: When should Neo alert immediately vs. log for review?
  • Testing boundaries: What is in scope? What is off-limits?

3. Define Your Reporting Requirements

Neo generates detailed findings, but the format and structure should match your needs. Specify:
  • Report format (Markdown, Jira tickets, JSON)
  • Required sections (executive summary, technical details, remediation steps)
  • Evidence requirements (screenshots, request/response pairs, reproduction steps)
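
If you want machine-readable output alongside human-readable reports, it helps to spell out the exact fields you expect per finding. A minimal sketch in Python (the field names here are illustrative, not a schema Neo defines):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    # Illustrative fields; align these with your own reporting requirements.
    title: str
    severity: str
    evidence: str
    remediation: str

    def to_markdown(self) -> str:
        # One section per finding, suitable for a Markdown report.
        return (f"## {self.title} ({self.severity})\n\n"
                f"**Evidence:** {self.evidence}\n\n"
                f"**Remediation:** {self.remediation}\n")

finding = Finding(
    title="IDOR on /orders/",
    severity="High",
    evidence="GET /orders/1002 returned another user's order",
    remediation="Enforce object-level authorization checks",
)
print(json.dumps(asdict(finding), indent=2))  # JSON for ticketing pipelines
print(finding.to_markdown())                  # Markdown for human review
```

Specifying a shape like this up front means the same findings can feed both a Markdown report and a Jira import without rework.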

4. Decouple Integration Tasks from Security Tasks

Neo is exceptional at security work, but integrations can introduce their own complexities. Before running security tests that depend on third-party tools or APIs:
  1. First, ensure the integration works correctly
  2. Then, prompt Neo with your security task
This prevents conflating integration issues with security findings.

5. Document Unique Authentication Flows

If your application has non-standard authentication (SSO, MFA, custom tokens), guide Neo through the process step by step. Let it learn and document the flow for future sessions.

6. Embrace Iterative Refinement

Complex tasks rarely succeed perfectly on the first attempt. Expect to iterate with Neo, especially for:
  • Authentication flows: Getting login, SSO, or MFA working may require a few rounds of feedback
  • Custom integrations: API connections or third-party tools may need troubleshooting
  • Novel attack surfaces: Unusual applications benefit from incremental exploration
This is normal and expected. Treat Neo like a team member: provide feedback, correct misunderstandings, and refine the approach together. Each iteration makes Neo smarter about your environment.
When something does not work as expected, tell Neo what went wrong and what you observed. This feedback helps it adjust and try a different approach.

Prompt Examples

Reconnaissance and Discovery

Good Approach

“Map the external attack surface for example.com. Discover all subdomains, identify exposed services, and enumerate technologies. Focus on finding admin panels, development endpoints, and configuration files. Save findings to Files and flag anything with default credentials or sensitive data exposure.”

Why This Works:
  • Clear scope (example.com and its subdomains)
  • Specific discovery objectives
  • Defined success criteria (admin panels, dev endpoints, config files)
  • Output instructions (save to Files, flag specific issues)

Bad Approach

“Find vulnerabilities on our website.”

Why This Fails:
  • No target specified
  • No scope boundaries
  • No prioritization guidance
  • No output format defined

Vulnerability Assessment

Good Approach

“Test the REST API at https://api.staging.example.com/v2 for authorization vulnerabilities. Use the test credentials in environment variables. Focus on:
  1. Horizontal privilege escalation between user accounts
  2. Vertical escalation from user to admin roles
  3. IDOR on /users/ and /orders/ endpoints
Our severity scale: Critical = data breach potential, High = privilege escalation, Medium = information disclosure. Create Jira tickets for Critical and High findings immediately.”

Why This Works:
  • Explicit target and endpoint scope
  • Authentication method specified
  • Prioritized test cases
  • Organization-specific severity definitions
  • Clear output and escalation instructions

Bad Approach

“Check our API for security issues.”

Why This Fails:
  • No API endpoint specified
  • No authentication context
  • No focus areas
  • No severity or reporting guidance

Code Review

Good Approach

“Review PR #892 in the payments-service repo. This change modifies the checkout flow and adds a new discount feature. Focus on:
  1. Input validation for discount codes
  2. Race conditions in discount application
  3. Price manipulation vectors
  4. Authorization checks for applying discounts
Reference our secure coding guidelines in Files/standards/secure-coding.md. Post findings as PR comments with severity and remediation suggestions.”

Why This Works:
  • Specific PR and repository
  • Context about what the change does
  • Targeted security checks relevant to the feature
  • Reference to existing standards
  • Clear output format (PR comments)

Bad Approach

“Make sure PR #892 is secure.”

Why This Fails:
  • No context about the change
  • No specific security concerns to check
  • No reference to coding standards
  • No guidance on output format

Penetration Testing

Good Approach

“Perform authenticated penetration testing on https://app.staging.example.com. Log in using the test account credentials from environment variables. Test the following user flows:
  1. User registration and profile management
  2. File upload in the documents section
  3. Payment processing in checkout
For each finding, provide:
  • Severity (using CVSS 3.1)
  • Reproduction steps
  • Request/response evidence
  • Business impact
  • Remediation guidance
Generate a Markdown report suitable for our quarterly security review.”

Why This Works:
  • Specific target environment
  • Authentication method defined
  • Scoped to specific features
  • Clear evidence and reporting requirements
  • Defined output format and purpose

Bad Approach

“Pentest our staging environment.”

Why This Fails:
  • No URL or scope
  • No authentication context
  • No priority areas
  • No reporting requirements

Integration-Dependent Tasks

Good Approach (Two-Step Process)

Step 1: “Connect to our Jira instance using the API token in environment variables. Verify you can read from the SEC project and create issues. List the available issue types and custom fields for security findings.”

Step 2: “Now that Jira integration is confirmed working, scan https://api.example.com for injection vulnerabilities. For each confirmed finding, create a Jira ticket in the SEC project with severity, evidence, and remediation steps. Use the ‘Security Bug’ issue type.”

Why This Works:
  • Validates integration before depending on it
  • Isolates integration issues from security work
  • Clear expectations for each step
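
Outside of Neo, the same verification step can be scripted. A rough sketch against the Jira Cloud REST API, assuming Basic auth with an email/API-token pair (the function names and environment-variable layout are illustrative):

```python
import base64
import urllib.error
import urllib.request

def jira_auth_header(email: str, api_token: str) -> str:
    # Jira Cloud uses HTTP Basic auth with "email:api_token".
    raw = f"{email}:{api_token}".encode()
    return "Basic " + base64.b64encode(raw).decode()

def verify_jira_access(base_url: str, email: str, api_token: str,
                       project_key: str) -> bool:
    # Step 1 of the two-step process: confirm read access to the project
    # before any scan depends on ticket creation.
    req = urllib.request.Request(
        f"{base_url}/rest/api/2/project/{project_key}",
        headers={"Authorization": jira_auth_header(email, api_token),
                 "Accept": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Usage (env-var names are illustrative):
# ok = verify_jira_access(os.environ["JIRA_URL"], os.environ["JIRA_EMAIL"],
#                         os.environ["JIRA_API_TOKEN"], "SEC")
```

Running a check like this first means a later ticket-creation failure clearly points at the scan, not at credentials or permissions.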

Bad Approach

“Scan our API for vulnerabilities and create Jira tickets for everything you find.”

Why This Fails:
  • Assumes Jira integration works
  • No verification step
  • If tickets fail to create, unclear if it is an integration issue or a Neo issue

Custom Authentication

Good Approach (Iterative Process)

Start by helping Neo learn the authentication flow:

Iteration 1: “Our staging app uses SSO with SAML. Try logging in at https://staging.example.com/login by clicking ‘Sign in with SSO’. Use username [email protected]. Tell me what you see at each step.”

Neo attempts login and reports what it encounters (e.g., “I see an MFA prompt asking for a 6-digit code”).

Iteration 2: “Good. For the MFA step, use the TOTP secret stored in environment variable AUTH_TOTP_SECRET to generate the code. Complete the login and confirm you reach the dashboard.”

Neo completes login successfully.

Iteration 3: “Perfect. Document this authentication flow and save it to Files/auth/staging-sso-flow.md so we can reuse it. Now test the authenticated user dashboard for XSS and CSRF vulnerabilities.”

Why This Works:
  • Breaks authentication into manageable steps
  • Allows Neo to report what it sees before proceeding
  • Enables course correction if something unexpected happens
  • Creates reusable documentation once the flow is working
  • Proceeds to security testing only after auth is confirmed

Bad Approach

“Log into our app and test it.”

Why This Fails:
  • No login URL
  • No credentials
  • No handling of SSO or MFA
  • No specific test objectives
  • No opportunity to iterate if login fails

Severity and Escalation

Good Approach

“Scan https://api.example.com for OWASP Top 10 vulnerabilities. Apply our severity classification:
  • Critical: RCE, SQL injection with data access, authentication bypass
  • High: Stored XSS, IDOR with PII exposure, privilege escalation
  • Medium: Reflected XSS, information disclosure, missing security headers
  • Low: Verbose errors, version disclosure
For Critical findings: immediately send a Slack alert to #security-incidents. For High findings: create a Jira ticket with P1 priority. For Medium/Low: include in the final report only.”

Why This Works:
  • Organization-specific severity definitions
  • Clear escalation paths for each level
  • Different handling based on severity
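
The escalation policy in that prompt is effectively a dispatch table. A sketch of the same routing in Python (the action strings are placeholders for real Slack and Jira calls):

```python
def route_finding(severity: str) -> str:
    # Mirrors the escalation policy in the prompt above.
    actions = {
        "Critical": "slack_alert:#security-incidents",
        "High": "jira_ticket:P1",
        "Medium": "report_only",
        "Low": "report_only",
    }
    try:
        return actions[severity]
    except KeyError:
        # Fail loudly on unknown severities rather than silently dropping them.
        raise ValueError(f"Unknown severity: {severity}")
```

Writing the policy down this explicitly, even in prose, is what lets Neo apply it consistently across a whole scan.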

Bad Approach

“Find critical vulnerabilities and let me know.”

Why This Fails:
  • No definition of “critical”
  • No escalation procedure
  • No handling for non-critical findings

Reporting and Output

Good Approach

“After completing the security assessment of https://app.example.com, generate a report with:
  1. Executive summary (2-3 paragraphs, non-technical)
  2. Risk overview table (finding, severity, status)
  3. Detailed findings with:
    • Description and impact
    • CVSS score and vector
    • Evidence (screenshots, requests)
    • Step-by-step reproduction
    • Remediation recommendations
  4. Appendix with all raw HTTP requests
Export as Markdown to Files/reports/q1-2025-assessment.md and also create individual Jira tickets for tracking.”

Why This Works:
  • Structured report requirements
  • Audience-appropriate sections
  • Specific evidence requirements
  • Multiple output formats for different uses

Bad Approach

“Send me a report when you’re done.”

Why This Fails:
  • No structure specified
  • No audience context
  • No format preference
  • No file location

Iterative Task Refinement

Good Approach (Feedback Loop)

Initial prompt: “Scan the GraphQL API at https://api.example.com/graphql for introspection and authorization issues.”

Neo runs the scan but reports: “Introspection is disabled. I cannot enumerate the schema.”

Follow-up: “The schema is available in Files/api/schema.graphql. Use that to understand the available queries and mutations, then test authorization on user-related operations.”

Neo tests but finds: “All queries return 401. The API requires a session token.”

Follow-up: “Use the session token stored in environment variable API_SESSION_TOKEN in the Authorization header. Retry the authorization tests.”

Neo completes testing successfully with findings.

Why This Works:
  • Responds to Neo’s feedback rather than assuming success
  • Provides additional context when roadblocks appear
  • Keeps the task moving forward incrementally
  • Results in a working approach that can be documented for next time

Bad Approach

“Test the GraphQL API for vulnerabilities. Let me know when you’re done.”

Why This Fails:
  • No opportunity to provide feedback if Neo hits a blocker
  • Assumes everything will work on the first try
  • May result in incomplete testing without knowing why
  • Misses the chance to build reusable context

Quick Reference

Scenario | Good Pattern | Bad Pattern
Exploration | “Map attack surface for X, focus on Y, flag Z” | “Find vulnerabilities”
Targeted testing | “Test endpoint X for vulnerability types A, B, C” | “Check if it’s secure”
Code review | “Review PR #X, focus on Y changes, check for Z” | “Is this PR safe?”
With integrations | Step 1: Verify integration. Step 2: Run task | “Do X and send to Y”
Custom auth | Guide step-by-step, iterate on each step | “Log in and test”
Reporting | Specify structure, format, and destination | “Give me a report”
When stuck | Provide feedback, adjust, and retry | Assume it will work or give up

Summary

The best prompts for Neo share common traits:
  1. Specific scope: Define what is in and out of bounds
  2. Clear objectives: State what you want to find or validate
  3. Context: Share your severity definitions, standards, and constraints
  4. Output requirements: Specify format, structure, and destination
  5. Decoupled steps: Separate integration verification from security work
  6. Authentication guidance: Provide step-by-step instructions for complex auth flows
  7. Iterative mindset: Expect to refine and provide feedback, especially for complex tasks
Neo adapts to your level of specificity. Broad prompts enable creative exploration. Specific prompts enable precise, repeatable execution. Match your prompt style to your goal, and provide the context Neo needs to work like a member of your security team.
Complex tasks often require iteration. When Neo encounters a blocker or produces unexpected results, provide feedback and adjust. Each iteration builds context that makes future tasks smoother.