Before diving in, check out Quickstart to understand how Neo plans and executes workflows.
Core Principles
1. Decouple Your Strategy Based on Your Goal
Neo excels at both open-ended exploration and precise execution. Choose your prompting style based on what you need:
- Exploratory (broad prompts): Use when you want Neo to think like an attacker and discover unknowns.
- Targeted (specific prompts): Use when you have defined objectives, known assets, or compliance requirements.
2. Specify Your Organization’s Context
Every organization has different risk tolerances, severity definitions, and escalation procedures. Neo cannot assume these. Be explicit about:
- Severity prioritization: What makes something critical vs. high vs. medium in your environment?
- Escalation triggers: When should Neo alert immediately vs. log for review?
- Testing boundaries: What is in scope? What is off-limits?
3. Define Your Reporting Requirements
Neo generates detailed findings, but the format and structure should match your needs. Specify:
- Report format (Markdown, Jira tickets, JSON)
- Required sections (executive summary, technical details, remediation steps)
- Evidence requirements (screenshots, request/response pairs, reproduction steps)
4. Decouple Integration Tasks from Security Tasks
Neo is exceptional at security work, but integrations can have their own complexities. Before running security tests that depend on third-party tools or APIs:
- First, ensure the integration works correctly
- Then, prompt Neo with your security task
5. Document Unique Authentication Flows
If your application has non-standard authentication (SSO, MFA, custom tokens), guide Neo through the process step by step. Let it learn and document the flow for future sessions.
6. Embrace Iterative Refinement
Complex tasks rarely succeed perfectly on the first attempt. Expect to iterate with Neo, especially for:
- Authentication flows: Getting login, SSO, or MFA working may require a few rounds of feedback
- Custom integrations: API connections or third-party tools may need troubleshooting
- Novel attack surfaces: Unusual applications benefit from incremental exploration
Prompt Examples
Reconnaissance and Discovery
Good Approach
“Map the external attack surface for example.com. Discover all subdomains, identify exposed services, and enumerate technologies. Focus on finding admin panels, development endpoints, and configuration files. Save findings to Files and flag anything with default credentials or sensitive data exposure.”
Why This Works:
- Clear scope (example.com and its subdomains)
- Specific discovery objectives
- Defined success criteria (admin panels, dev endpoints, config files)
- Output instructions (save to Files, flag specific issues)
Bad Approach
“Find vulnerabilities on our website.”
Why This Fails:
- No target specified
- No scope boundaries
- No prioritization guidance
- No output format defined
Vulnerability Assessment
Good Approach
“Test the REST API at https://api.staging.example.com/v2 for authorization vulnerabilities. Use the test credentials in environment variables. Focus on:
- Horizontal privilege escalation between user accounts
- Vertical escalation from user to admin roles
- IDOR on /users/ and /orders/ endpoints”
Why This Works:
- Explicit target and endpoint scope
- Authentication method specified
- Prioritized test cases
- Organization-specific severity definitions
- Clear output and escalation instructions
Bad Approach
“Check our API for security issues.”
Why This Fails:
- No API endpoint specified
- No authentication context
- No focus areas
- No severity or reporting guidance
Code Review
Good Approach
“Review PR #892 in the payments-service repo. This change modifies the checkout flow and adds a new discount feature. Focus on:
- Input validation for discount codes
- Race conditions in discount application
- Price manipulation vectors
- Authorization checks for applying discounts”
Why This Works:
- Specific PR and repository
- Context about what the change does
- Targeted security checks relevant to the feature
- Reference to existing standards
- Clear output format (PR comments)
Bad Approach
“Make sure PR #892 is secure.”
Why This Fails:
- No context about the change
- No specific security concerns to check
- No reference to coding standards
- No guidance on output format
Penetration Testing
Good Approach
“Perform authenticated penetration testing on https://app.staging.example.com. Log in using the test account credentials from environment variables. Test the following user flows:
- User registration and profile management
- File upload in the documents section
- Payment processing in checkout
For each finding, document:
- Severity (using CVSS 3.1)
- Reproduction steps
- Request/response evidence
- Business impact
- Remediation guidance”
Why This Works:
- Specific target environment
- Authentication method defined
- Scoped to specific features
- Clear evidence and reporting requirements
- Defined output format and purpose
Bad Approach
“Pentest our staging environment.”
Why This Fails:
- No URL or scope
- No authentication context
- No priority areas
- No reporting requirements
Integration-Dependent Tasks
Good Approach (Two-Step Process)
Step 1: “Connect to our Jira instance using the API token in environment variables. Verify you can read from the SEC project and create issues. List the available issue types and custom fields for security findings.”
Step 2: “Now that Jira integration is confirmed working, scan https://api.example.com for injection vulnerabilities. For each confirmed finding, create a Jira ticket in the SEC project with severity, evidence, and remediation steps. Use the ‘Security Bug’ issue type.”
Why This Works:
- Validates integration before depending on it
- Isolates integration issues from security work
- Clear expectations for each step
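The verification step can also be done outside Neo before you prompt it. A minimal pre-flight sketch, assuming a Jira Cloud instance with basic auth and REST API v2 (the base URL, project key, and endpoint choice are assumptions, not something the prompt above fixes):

```python
import base64


def jira_auth_header(email: str, api_token: str) -> dict:
    """Basic-auth header Jira Cloud expects: base64("email:api_token")."""
    raw = f"{email}:{api_token}".encode()
    return {"Authorization": "Basic " + base64.b64encode(raw).decode()}


def verification_requests(base_url: str, project_key: str) -> list[str]:
    """URLs a pre-flight check would GET, in order, before filing tickets:
    1) is the token valid?  2) is the project visible?  3) which issue
    types and fields does ticket creation in this project require?"""
    base = base_url.rstrip("/")
    return [
        f"{base}/rest/api/2/myself",
        f"{base}/rest/api/2/project/{project_key}",
        f"{base}/rest/api/2/issue/createmeta?projectKeys={project_key}",
    ]
```

If any of these calls fails, you know it is an integration problem, not a scanning problem, before the security task ever starts.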
Bad Approach
“Scan our API for vulnerabilities and create Jira tickets for everything you find.”
Why This Fails:
- Assumes Jira integration works
- No verification step
- If tickets fail to create, unclear if it is an integration issue or a Neo issue
Custom Authentication
Good Approach (Iterative Process)
Start by helping Neo learn the authentication flow:
Iteration 1: “Our staging app uses SSO with SAML. Try logging in at https://staging.example.com/login by clicking ‘Sign in with SSO’. Use username [email protected]. Tell me what you see at each step.”
Neo attempts login and reports what it encounters (e.g., “I see an MFA prompt asking for a 6-digit code”)
Iteration 2: “Good. For the MFA step, use the TOTP secret stored in environment variable AUTH_TOTP_SECRET to generate the code. Complete the login and confirm you reach the dashboard.”
Neo completes login successfully
Iteration 3: “Perfect. Document this authentication flow and save it to Files/auth/staging-sso-flow.md so we can reuse it. Now test the authenticated user dashboard for XSS and CSRF vulnerabilities.”
Why This Works:
- Breaks authentication into manageable steps
- Allows Neo to report what it sees before proceeding
- Enables course correction if something unexpected happens
- Creates reusable documentation once the flow is working
- Proceeds to security testing only after auth is confirmed
Bad Approach
“Log into our app and test it.”
Why This Fails:
- No login URL
- No credentials
- No handling of SSO or MFA
- No specific test objectives
- No opportunity to iterate if login fails
Severity and Escalation
Good Approach
“Scan https://api.example.com for OWASP Top 10 vulnerabilities. Apply our severity classification:
- Critical: RCE, SQL injection with data access, authentication bypass
- High: Stored XSS, IDOR with PII exposure, privilege escalation
- Medium: Reflected XSS, information disclosure, missing security headers
- Low: Verbose errors, version disclosure”
Why This Works:
- Organization-specific severity definitions
- Clear escalation paths for each level
- Different handling based on severity
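The classification in the prompt above is just a lookup table, and writing it down that way is a good check that it is unambiguous. A sketch; the finding-class keys and the escalation actions are illustrative placeholders, not part of the original prompt:

```python
# Severity rules from the prompt above, expressed as a lookup table.
SEVERITY_BY_CLASS = {
    "rce": "critical",
    "sql_injection": "critical",
    "auth_bypass": "critical",
    "stored_xss": "high",
    "idor_pii": "high",
    "privilege_escalation": "high",
    "reflected_xss": "medium",
    "info_disclosure": "medium",
    "missing_security_headers": "medium",
    "verbose_errors": "low",
    "version_disclosure": "low",
}

# Hypothetical escalation actions per severity level.
ACTION_BY_SEVERITY = {
    "critical": "alert immediately",
    "high": "create ticket",
    "medium": "log for review",
    "low": "log for review",
}


def triage(finding_class: str) -> tuple[str, str]:
    """Map a finding class to (severity, escalation action). Unknown
    classes default to medium so nothing is silently dropped."""
    severity = SEVERITY_BY_CLASS.get(finding_class, "medium")
    return severity, ACTION_BY_SEVERITY[severity]
```

The default-to-medium fallback is the important design choice: an unrecognized finding surfaces for review instead of disappearing.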
Bad Approach
“Find critical vulnerabilities and let me know.”
Why This Fails:
- No definition of “critical”
- No escalation procedure
- No handling for non-critical findings
Reporting and Output
Good Approach
“After completing the security assessment of https://app.example.com, generate a report with:
- Executive summary (2-3 paragraphs, non-technical)
- Risk overview table (finding, severity, status)
- Detailed findings with:
  - Description and impact
  - CVSS score and vector
  - Evidence (screenshots, requests)
  - Step-by-step reproduction
  - Remediation recommendations
- Appendix with all raw HTTP requests”
Why This Works:
- Structured report requirements
- Audience-appropriate sections
- Specific evidence requirements
- Multiple output formats for different uses
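If you ask for JSON output, give Neo the exact shape you want. A hypothetical finding record matching the sections above; all field names and values here are illustrative, not a fixed Neo schema:

```python
import json

# Illustrative finding record mirroring the report sections above.
finding = {
    "title": "Stored XSS in profile bio",
    "severity": "high",          # org-specific rating, not the CVSS band
    "status": "open",
    "cvss": {
        "score": 5.4,
        "vector": "CVSS:3.1/AV:N/AC:L/PR:L/UI:R/S:C/C:L/I:L/A:N",
    },
    "description": "User-supplied HTML in the bio field is rendered unescaped.",
    "reproduction": [
        "Log in as a low-privilege user",
        "Save a script payload as the bio",
        "View the profile as another user",
    ],
    "evidence": ["requests/bio-post.http", "screenshots/bio-xss.png"],
    "remediation": "Encode output and set a restrictive Content-Security-Policy.",
}

report_json = json.dumps(finding, indent=2)
```

Pasting a skeleton like this into the prompt removes any ambiguity about field names, nesting, and which evidence paths to reference.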
Bad Approach
“Send me a report when you’re done.”
Why This Fails:
- No structure specified
- No audience context
- No format preference
- No file location
Iterative Task Refinement
Good Approach (Feedback Loop)
Initial prompt: “Scan the GraphQL API at https://api.example.com/graphql for introspection and authorization issues.”
Neo runs the scan but reports: “Introspection is disabled. I cannot enumerate the schema.”
Follow-up: “The schema is available in Files/api/schema.graphql. Use that to understand the available queries and mutations, then test authorization on user-related operations.”
Neo tests but finds: “All queries return 401. The API requires a session token.”
Follow-up: “Use the session token stored in environment variable API_SESSION_TOKEN in the Authorization header. Retry the authorization tests.”
Neo completes testing successfully with findings
Why This Works:
- Responds to Neo’s feedback rather than assuming success
- Provides additional context when roadblocks appear
- Keeps the task moving forward incrementally
- Results in a working approach that can be documented for next time
Bad Approach
“Test the GraphQL API for vulnerabilities. Let me know when you’re done.”
Why This Fails:
- No opportunity to provide feedback if Neo hits a blocker
- Assumes everything will work on the first try
- May result in incomplete testing without knowing why
- Misses the chance to build reusable context
Quick Reference
| Scenario | Good Pattern | Bad Pattern |
|---|---|---|
| Exploration | “Map attack surface for X, focus on Y, flag Z” | “Find vulnerabilities” |
| Targeted testing | “Test endpoint X for vulnerability types A, B, C” | “Check if it’s secure” |
| Code review | “Review PR #X, focus on Y changes, check for Z” | “Is this PR safe?” |
| With integrations | Step 1: Verify integration. Step 2: Run task | “Do X and send to Y” |
| Custom auth | Guide step-by-step, iterate on each step | “Log in and test” |
| Reporting | Specify structure, format, and destination | “Give me a report” |
| When stuck | Provide feedback, adjust, and retry | Assume it will work or give up |
Summary
The best prompts for Neo share common traits:
- Specific scope: Define what is in and out of bounds
- Clear objectives: State what you want to find or validate
- Context: Share your severity definitions, standards, and constraints
- Output requirements: Specify format, structure, and destination
- Decoupled steps: Separate integration verification from security work
- Authentication guidance: Provide step-by-step instructions for complex auth flows
- Iterative mindset: Expect to refine and provide feedback, especially for complex tasks
Complex tasks often require iteration. When Neo encounters a blocker or produces unexpected results, provide feedback and adjust. Each iteration builds context that makes future tasks smoother.

