How to Use It
The Dilution dataset is designed to be used as a signal source, a risk filter, and a lifecycle tracker within systematic equity workflows.
Each record introduces new information into the market at a specific point in time — the filing date — and resolves forward as the filing either becomes effective or is withdrawn.
1. Event-Driven Signal Generation
The primary use case is identifying new dilution risk as it enters the market.
Common approaches include:
Flagging newly filed S-1s labeled as dilutive
Conditioning exposure immediately following the filing date
Grouping filings into short-biased baskets
Avoiding long exposure in names with active dilution risk
Because filings are captured at the moment they are filed, this dataset is well-suited for event-based backtests and live monitoring.
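The flagging pattern above can be sketched in a few lines. This is a minimal illustration over in-memory records; the field names (`form_type`, `is_dilutive`, `filing_date`, `ticker`) are assumptions for the sketch, not necessarily the dataset's actual keys.

```python
from datetime import date, timedelta

# Hypothetical records; the keys below are illustrative, not the dataset's schema.
filings = [
    {"ticker": "AAA", "form_type": "S-1", "is_dilutive": True,  "filing_date": date(2024, 6, 3)},
    {"ticker": "BBB", "form_type": "S-3", "is_dilutive": False, "filing_date": date(2024, 6, 4)},
    {"ticker": "CCC", "form_type": "S-1", "is_dilutive": True,  "filing_date": date(2024, 5, 1)},
]

def new_dilutive_s1s(records, as_of, lookback_days=30):
    """Return tickers with a dilutive S-1 filed inside the lookback window."""
    cutoff = as_of - timedelta(days=lookback_days)
    return [
        r["ticker"]
        for r in records
        if r["form_type"] == "S-1"
        and r["is_dilutive"]
        and cutoff <= r["filing_date"] <= as_of
    ]

print(new_dilutive_s1s(filings, as_of=date(2024, 6, 10)))  # → ['AAA']
```

The same predicate works for live monitoring: run it on each new batch of filings with `as_of` set to today.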
2. Lifecycle-Aware Trading
Unlike the one-shot events in static datasets, dilution risk evolves over time.
This dataset allows you to:
Track how long filings take to become effective
Study performance differences between filings that resolve quickly vs. those that resolve slowly
Separate false positives (withdrawn filings) from completed dilution events
Analyze post-effectiveness behavior
Fields such as became_effective, effective_date, offering_withdrawn, and days_to_effective enable lifecycle-aware strategies rather than single-day reactions.
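A simple lifecycle bucketing using the `became_effective` and `offering_withdrawn` fields named above might look like the sketch below (the sample records are illustrative):

```python
def classify_lifecycle(filing):
    """Bucket a filing by resolution state using the dataset's lifecycle fields."""
    if filing.get("offering_withdrawn"):
        return "withdrawn"   # false positive: the dilution never completed
    if filing.get("became_effective"):
        return "effective"   # completed dilution event
    return "pending"         # unresolved; still an active risk

filings = [
    {"ticker": "AAA", "became_effective": True,  "offering_withdrawn": False, "days_to_effective": 45},
    {"ticker": "BBB", "became_effective": False, "offering_withdrawn": True,  "days_to_effective": None},
    {"ticker": "CCC", "became_effective": False, "offering_withdrawn": False, "days_to_effective": None},
]

buckets = {}
for f in filings:
    buckets.setdefault(classify_lifecycle(f), []).append(f["ticker"])
```

From here, `days_to_effective` on the "effective" bucket gives the resolution-speed distribution directly, with no date arithmetic required.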
3. Risk Filtering & Portfolio Construction
The dataset can also be used defensively.
Examples include:
Excluding names with active dilutive filings from long universes
Adjusting position sizing based on dilution magnitude (shares_offered vs. market cap)
Conditioning factor portfolios to avoid structural headwinds
Screening small-cap universes for persistent dilution behavior
Because market capitalization is measured prior to filing, these filters can be applied without look-ahead bias.
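A defensive filter along these lines can be sketched as follows. The `shares_offered` field comes from the dataset; the `pre_filing_market_cap` key and the external `prices` lookup are assumptions made for the sketch, since approximating offering value requires a share price.

```python
def dilution_ratio(filing, last_price):
    """Approximate dilution magnitude: offering value over pre-filing market cap.
    `pre_filing_market_cap` is an assumed field name for this sketch."""
    offering_value = filing["shares_offered"] * last_price
    return offering_value / filing["pre_filing_market_cap"]

def filter_long_universe(universe, active_filings, prices, max_ratio=0.10):
    """Drop tickers whose active dilutive filing exceeds the size threshold."""
    risky = {
        f["ticker"]
        for f in active_filings
        if dilution_ratio(f, prices[f["ticker"]]) > max_ratio
    }
    return [t for t in universe if t not in risky]

prices = {"AAA": 5.0, "BBB": 10.0}
active = [
    {"ticker": "AAA", "shares_offered": 2_000_000, "pre_filing_market_cap": 40_000_000},
    {"ticker": "BBB", "shares_offered": 100_000,   "pre_filing_market_cap": 100_000_000},
]
longs = filter_long_universe(["AAA", "BBB", "CCC"], active, prices)  # → ['BBB', 'CCC']
```

Because the market cap is measured before the filing, the threshold comparison uses only information available at event time.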
4. Cross-Sectional Research
Beyond trading, the dataset supports broader research questions, such as:
How often dilutive filings are withdrawn
Typical time-to-effectiveness distributions
Differences between resale and primary offerings
Structural dilution patterns by market cap cohort
These analyses can inform both strategy design and risk management.
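As one example of this kind of cross-sectional question, withdrawal rates by market-cap cohort can be computed with plain counting. The `mcap_cohort` key is an assumed grouping label for the sketch; `became_effective` and `offering_withdrawn` are the dataset's lifecycle fields.

```python
# Illustrative records; `mcap_cohort` is a hypothetical grouping key.
filings = [
    {"mcap_cohort": "micro", "became_effective": True,  "offering_withdrawn": False},
    {"mcap_cohort": "micro", "became_effective": False, "offering_withdrawn": True},
    {"mcap_cohort": "small", "became_effective": True,  "offering_withdrawn": False},
    {"mcap_cohort": "small", "became_effective": False, "offering_withdrawn": False},  # still pending
]

def withdrawal_rate(records):
    """Share of *resolved* filings that were withdrawn rather than effective."""
    resolved = [f for f in records if f["became_effective"] or f["offering_withdrawn"]]
    if not resolved:
        return 0.0
    return sum(f["offering_withdrawn"] for f in resolved) / len(resolved)

by_cohort = {
    cohort: withdrawal_rate([f for f in filings if f["mcap_cohort"] == cohort])
    for cohort in ("micro", "small")
}
```

Note that pending filings are excluded from the denominator, so the rate is not biased downward by recent, unresolved events.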
5. Practical Query Patterns
Typical workflows include:
Pulling the most recent filings across all tickers
Querying a single ticker’s dilution history
Scanning a rolling date window for new events
Monitoring unresolved filings over time
The API is stateless and composable, making it easy to integrate into scheduled jobs, research notebooks, or live trading pipelines.
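The four query patterns above might be composed as URL builders like the sketch below. The base URL and every query-parameter name here are placeholders invented for illustration; consult the actual API reference for the real endpoint and parameters.

```python
from urllib.parse import urlencode

BASE = "https://api.example.com/dilution"  # placeholder, not the real endpoint

def latest_filings_url(limit=50):
    """Most recent filings across all tickers (parameter names assumed)."""
    return f"{BASE}/filings?{urlencode({'sort': '-filing_date', 'limit': limit})}"

def ticker_history_url(ticker):
    """A single ticker's full dilution history."""
    return f"{BASE}/filings?{urlencode({'ticker': ticker})}"

def window_url(start, end):
    """Rolling date-window scan for new events (ISO date strings)."""
    return f"{BASE}/filings?{urlencode({'filed_after': start, 'filed_before': end})}"

def unresolved_url():
    """Filings that have neither become effective nor been withdrawn."""
    return f"{BASE}/filings?{urlencode({'became_effective': 'false', 'offering_withdrawn': 'false'})}"
```

Because each builder is a pure function of its inputs, the same helpers slot into a cron job, a notebook cell, or a live pipeline without shared state.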
What This Dataset Is — and Is Not
It does identify structural dilution risk at the moment it appears
It does not predict price direction on its own
It is best used as an input into broader systematic frameworks
Used correctly, it provides clarity around one of the most persistent sources of equity underperformance.