V1 Reporting APIs to Export API migration guide
End of life
The v1 Reporting API is being deprecated, and future development and support are focused on the Dataset Export API and the Issues REST API. Snyk recommends migrating to the new Export API.
This guide outlines the migration path for all routes in the legacy v1 Reporting API.
Legacy v1 Reporting API to new Export API migration guide
The most significant change is the shift from a direct, synchronous request/response model (v1) to an asynchronous job-based model (Export API).
Endpoint Structure
v1 Reporting API: Direct resource endpoints (for example, /v1/reporting/issues).
Export API: Scoped job endpoints. The new API is focused on the Organization or Group scope, with the Organization ID or Group ID as path parameters.
Response Format
v1 Reporting API: JSON in the response body.
Export API: CSV or JSON files retrieved through a signed URL.
Data Retrieval
v1 Reporting API: Synchronous, immediate response.
Export API: Asynchronous job pattern (initiate, check status, fetch results) designed for large data volumes; results are delivered as securely stored CSV files accessed through a signed URL, replacing direct, synchronous POST requests.
Rate Limit
v1 Reporting API: 70 requests per minute, per user.
Export API: 20 export POST requests per hour (status and results checks are unlimited).
Required Headers
v1 Reporting API: Authorization.
Export API: Authorization and Version (for example, 2024-10-15).
New export workflow
To replace any v1 Reporting API call, follow these steps (a minimal end-to-end sketch follows the steps):

Initiate the Export:
Send a POST request with your filters to create an export job. You will receive an export_id.
Endpoint: POST /groups/{group_id}/export
Required Parameters: The body must include the filters (including at least one date filter: introduced or updated) and specify the dataset (issues or usage) in the request header.

Check the Export Status:
Poll the status endpoint using the export_id until the status is FINISHED.
Endpoint: GET /groups/{group_id}/jobs/export/{export_id}
Statuses: PENDING, STARTED, FINISHED, ERROR

Fetch the Results in a CSV:
After the status is FINISHED, fetch the results; the response returns a signed URL to the CSV file.
Endpoint: GET /groups/{group_id}/export/{export_id}
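The flow can be scripted end to end. The sketch below is illustrative only: the endpoint paths, statuses, and Version value come from this guide, but the base URL, request-body shape, whether the version is sent as a header or query parameter, and the response fields that carry the export_id, job status, and signed URL are assumptions to verify against the Export API reference.

```python
# Illustrative initiate -> poll -> fetch flow for a Group-level export.
# Response field names and the request-body shape are assumptions.
import time

import requests

API_BASE = "https://api.snyk.io/rest"        # assumed base URL
GROUP_ID = "your-group-id"                   # placeholder
VERSION = "2024-10-15"                       # version value used in this guide
HEADERS = {"Authorization": "token YOUR_SNYK_TOKEN"}  # placeholder token

# 1. Initiate the export with at least one date filter (introduced or updated).
init = requests.post(
    f"{API_BASE}/groups/{GROUP_ID}/export",
    headers=HEADERS,
    params={"version": VERSION},             # this guide lists Version as a header; confirm which applies
    json={"filters": {"introduced": {"from": "2024-01-01", "to": "2024-03-31"}}},
)
init.raise_for_status()
export_id = init.json()["data"]["id"]        # assumed location of the export_id

# 2. Poll the status endpoint until the job reaches FINISHED (or fails).
while True:
    job = requests.get(
        f"{API_BASE}/groups/{GROUP_ID}/jobs/export/{export_id}",
        headers=HEADERS,
        params={"version": VERSION},
    )
    job.raise_for_status()
    status = job.json()["data"]["attributes"]["status"]   # assumed field path
    if status == "FINISHED":
        break
    if status == "ERROR":
        raise RuntimeError("Export job failed")
    time.sleep(10)

# 3. Fetch the results; the response carries a signed URL to the CSV file.
results = requests.get(
    f"{API_BASE}/groups/{GROUP_ID}/export/{export_id}",
    headers=HEADERS,
    params={"version": VERSION},
)
results.raise_for_status()
signed_url = results.json()["data"]["attributes"]["results"][0]["url"]  # assumed field path
with open("snyk_export.csv", "wb") as fh:
    fh.write(requests.get(signed_url).content)
```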
Route-specific migration
The v1 routes for issues and Project counts now primarily map to the Export API's issues dataset, while test counts map to the usage dataset.
POST /v1/reporting/issues
Get list of issues in a timeframe
Dataset: issues
Use introduced (from/to) and/or updated (from/to) filters to replicate the timeframe logic.
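For example, the v1 timeframe can be reproduced with the Export API date filters. The snippet below shows only the filters object; the surrounding request envelope and the accepted date format are assumptions.

```python
# Illustrative filters for reproducing a v1 "issues in a timeframe" query.
filters = {
    "introduced": {"from": "2024-01-01", "to": "2024-03-31"},
    "updated": {"from": "2024-01-01", "to": "2024-03-31"},
}
```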
POST /v1/reporting/issues/latest
Get list of latest issues
Dataset: issues
Use only the updated (from) filter with a recent date if you need issues updated since a specific point.
POST /v1/reporting/counts/issues
Get issue counts in a timeframe
Dataset: issues
Counting logic is now handled by processing the full exported CSV file. Use introduced and/or updated filters to replicate the timeframe.
POST /v1/reporting/counts/issues/latest
Get latest issue counts
Dataset: issues
Counting logic is now handled by processing the full exported CSV file.
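For example, once the issues CSV has been downloaded (see the workflow above), the v1 count endpoints can be reproduced by aggregating rows locally. The snippet below is a sketch: the issue_severity column name is taken from the mapping later in this guide, the file name is a placeholder, and one exported row per issue is assumed.

```python
# Count exported issues, broken down by severity.
import csv
from collections import Counter

severity_counts = Counter()
with open("snyk_export.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        severity_counts[row["issue_severity"]] += 1

print(dict(severity_counts))
print("total issues:", sum(severity_counts.values()))
```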
POST /v1/reporting/counts/projects
Get project counts in a timeframe
Dataset: issues
Counting logic is now handled by processing the full exported CSV file. Use the project_public_id or project_name columns in the CSV to identify projects.
POST /v1/reporting/counts/projects/latest
Get latest project counts
Dataset: issues
Counting logic is now handled by processing the full exported CSV file.
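A project count can be derived from the same issues export by counting distinct values of the project_public_id column mentioned above; note that this only covers projects that appear in the exported rows. A minimal sketch, with a placeholder file name:

```python
# Count distinct projects present in the exported issues CSV.
import csv

with open("snyk_export.csv", newline="") as fh:
    project_ids = {row["project_public_id"] for row in csv.DictReader(fh)}

print("projects:", len(project_ids))
```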
POST /v1/reporting/counts/tests
Get test counts in a timeframe
Dataset: usage
Counting logic is now handled by processing the full exported CSV file, filtering by interaction_type: "Scan done". Use updated (from/to) to manage the timeframe.
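For test counts, the same approach applies to a usage-dataset export: count rows whose interaction_type is "Scan done", as described above. The file name and exact column spelling below are assumptions.

```python
# Count scans in a usage-dataset export.
import csv

with open("snyk_usage_export.csv", newline="") as fh:
    scans = sum(
        1
        for row in csv.DictReader(fh)
        if row.get("interaction_type") == "Scan done"
    )

print("tests (scans done):", scans)
```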
Important considerations for count endpoints
The v1 issue count endpoints (/v1/reporting/counts/issues and /latest) relied on a legacy data model and do not accurately reflect the reality of all Snyk products.
Key issue:
The legacy v1 issue count endpoints do not include issues from Snyk Code, and do not include issues generated by the most recent Snyk Infrastructure as Code (IaC) engines.
Action: Migrating to the Export API with the issues dataset will provide a more complete and accurate count of all issues across all Snyk products, including IaC. Any count migration should be aware that the new number will likely be higher and more representative of your total issue landscape.
Filter parameter migration
The Export API uses filters in the initial POST /export request body.
updated (from and to)
Datasets: issues, usage
The date and time of the last update that affected any attribute in the dataset. Required (or use introduced).
introduced (from and to)
Datasets: issues
The date and time when the issue was introduced. Required (or use updated).
orgs
Datasets: issues, usage
Snyk Organization ID(s). Only available for the Group endpoints.
environment
Datasets: issues
The environment of the Project (for example, external).
lifecycle
Datasets: issues
The lifecycle of the Project (for example, production).
product_name
Datasets: issues
Name of the Snyk product that produced the issue (for example, Snyk IaC).
project_type
Datasets: issues
The scanning method used for the Project (for example, npm, maven). Case-sensitive value.
project_tags
Datasets: issues
Project tags as a key:value pair. Case-sensitive value.
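The filters above are combined in the body of the initial POST /export request. The snippet below is illustrative only: the filter names come from the list above, but the surrounding request envelope, the accepted value formats (including the project_tags shape), and the date format are assumptions to check against the Export API reference.

```python
# Illustrative combination of Export API filters for a Group-level issues export.
filters = {
    "updated": {"from": "2024-09-01", "to": "2024-09-30"},
    "orgs": ["org-id-1", "org-id-2"],        # Group endpoints only
    "environment": ["external"],
    "lifecycle": ["production"],
    "product_name": ["Snyk IaC"],
    "project_type": ["npm"],                 # case-sensitive
    "project_tags": [{"key": "team", "value": "payments"}],  # assumed key:value shape
}
```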
For any v1 filter not listed above (for example, severity, fixable, languages), the new process is to download the full CSV and apply the necessary filtering logic in your application or data warehouse using the corresponding column:
severity → issue_severity
languages → project_type
ignored, isFixed, status → issue_status
fixable, isUpgradable, etc. → computed_fixability
projects → project_public_id
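For example, client-side filtering with the mapped columns might look like the sketch below. The column names follow the mapping above, while the literal cell values (severity levels, status text) are assumptions to check against your exported file.

```python
# Apply v1-style severity/status filters locally after downloading the CSV.
import csv

with open("snyk_export.csv", newline="") as fh:
    filtered = [
        row
        for row in csv.DictReader(fh)
        if row["issue_severity"] in {"critical", "high"}  # v1 "severity" filter
        and row["issue_status"] == "Open"                 # v1 "ignored"/"status" filters
    ]

print("matching issues:", len(filtered))
```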
CSV to JSON conversion tool
Since the Export API returns data as CSV, and the legacy API returned JSON, you will likely need to convert the exported file for structured consumption by your application.
Native JSON output
We are prioritizing adding the ability to export data directly in JSON format, which will be available in early November. This feature will simplify data consumption for users migrating from the v1 API.
Current workaround script
An example Python script, csv_to_json_tool.py, is provided; it automates downloading the CSV from the signed URL, converting it to JSON, and optionally formatting the output with the command-line tool jq. A minimal sketch of the same flow is shown after the prerequisites.
Prerequisites for the script:
Python: The script requires Python 3.x.
Libraries: requests (pip install requests).
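The full csv_to_json_tool.py script is not reproduced here; the sketch below shows the core of what it does, assuming a placeholder signed URL and output path.

```python
# Download the exported CSV from the signed URL and write it out as JSON.
import csv
import io
import json

import requests

signed_url = "https://example.com/snyk-export.csv"  # placeholder signed URL
response = requests.get(signed_url)
response.raise_for_status()

rows = list(csv.DictReader(io.StringIO(response.text)))
with open("snyk_export.json", "w") as fh:
    json.dump(rows, fh, indent=2)

print("wrote", len(rows), "records to snyk_export.json")
```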
Alternative: Issues REST API for transactional data
While the Export API is the direct replacement for v1 reporting, the new Issues REST API is available for developers needing real-time, transactional access to issues (as JSON responses) for integration purposes, rather than bulk CSV reports.
GET /rest/orgs/{org_id}/issues
Get a paginated list of all issues for an Organization.
GET /rest/groups/{group_id}/issues
Get a paginated list of all issues for a Group.
GET /rest/orgs/{org_id}/issues/{issue_id}
Get details for a single issue.
GET /rest/orgs/{org_id}/packages/{purl}/issues
Query issues for a specific package version identified by Package URL (purl). Returns direct vulnerabilities only.
POST /rest/orgs/{org_id}/packages/issues
Query issues for a batch of packages by purl (not available to all customers).
These endpoints support various query parameters for filtering (for example, effective_severity_level, type, status, and date ranges such as updated_after), use cursor-based pagination (starting_after, ending_before), and return data in a structured JSON format.
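For example, a paginated Organization-level query might look like the sketch below. The endpoint path and parameter names come from this guide; the base URL, version value, parameter values, and the response/link shape are assumptions to verify against the Issues REST API reference.

```python
# Fetch open high-severity issues for an Organization, following cursor pagination.
import requests

API_BASE = "https://api.snyk.io/rest"        # assumed base URL
ORG_ID = "your-org-id"                       # placeholder
HEADERS = {"Authorization": "token YOUR_SNYK_TOKEN"}  # placeholder token

url = f"{API_BASE}/orgs/{ORG_ID}/issues"
params = {
    "version": "2024-10-15",                 # version value used in this guide
    "effective_severity_level": "high",      # assumed value format
    "status": "open",                        # assumed value format
    "limit": 100,
}

issues = []
while url:
    resp = requests.get(url, headers=HEADERS, params=params)
    resp.raise_for_status()
    body = resp.json()
    issues.extend(body.get("data", []))
    next_link = body.get("links", {}).get("next")   # assumed JSON:API-style next link
    url = f"{API_BASE}{next_link}" if next_link else None
    params = None                            # the next link already carries the query string

print("fetched issues:", len(issues))
```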