While we work with Research to deploy and productionize a robust ML model that qualifies citations and references, we want to further investigate how we can augment our parsed references.
**ToDo**
[ ] Attempt to match parsed references against the English Wikipedia perennial sources list
[ ] Investigate the capabilities of the citation event stream API
[ ] Determine how we can detect whether a cited URL is broken or returns a 404 page
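For the perennial-sources matching, one plausible starting point is to reduce each parsed reference's URL to its host and look that host up in the list. A minimal sketch, assuming the list is available as a domain-to-rating mapping (the domains and ratings below are illustrative placeholders, not the real list):

```python
from urllib.parse import urlparse

# Hypothetical excerpt of the perennial sources list, keyed by domain.
# In practice this would be loaded from the actual list data.
PERENNIAL_SOURCES = {
    "nytimes.com": "generally reliable",
    "dailymail.co.uk": "deprecated",
}

def classify_reference_url(url):
    """Return the perennial-sources rating for a cited URL's domain, or None."""
    host = urlparse(url).netloc.lower()
    # Strip a leading "www." so "www.nytimes.com" matches "nytimes.com".
    if host.startswith("www."):
        host = host[4:]
    return PERENNIAL_SOURCES.get(host)
```

A fuller version would also need to handle subdomains and references that carry no URL at all (title/publisher matching), which is part of what this investigation should scope out.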
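For the event stream investigation: assuming the API serves server-sent events over HTTP (as Wikimedia EventStreams does), a consumer mainly needs to split the stream into blank-line-separated events and decode the `data:` payloads. A small, offline-testable sketch of that parsing step:

```python
import json

def parse_sse_events(raw):
    """Parse server-sent-events text into a list of JSON payloads.

    Events are separated by blank lines; each event's payload lives on
    lines prefixed with "data: ".
    """
    events = []
    for block in raw.split("\n\n"):
        data_lines = [line[len("data: "):]
                      for line in block.splitlines()
                      if line.startswith("data: ")]
        if data_lines:
            events.append(json.loads("\n".join(data_lines)))
    return events
```

Which streams exist, what their citation-related payloads contain, and whether resuming from an offset is supported are exactly the capabilities this task should confirm.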
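For the broken-URL question, a first-pass check could issue an HTTP HEAD request and treat any 4xx/5xx status or connection failure as broken. A minimal sketch using only the standard library:

```python
import urllib.error
import urllib.request

def is_url_broken(url, timeout=5.0):
    """Best-effort check: True if the URL fails to resolve or returns an HTTP error."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except urllib.error.HTTPError as e:
        # urllib raises HTTPError for 404s and other HTTP error statuses.
        return e.code >= 400
    except (urllib.error.URLError, TimeoutError):
        # DNS failure, connection refused, timeout, etc.
        return True
```

Note this only catches hard failures: "soft 404s" (pages that return 200 but display a not-found message) would require inspecting the response body, which is worth calling out in the findings document.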
**Acceptance Criteria**
[ ] Report findings in a document for review
[ ] Review with Francisco
===== Test Strategy =====
How am I planning to test this? Can I do integration testing, or should this be tested manually? Where should this testing happen (which environment: local or dev)? What can this feature influence (what could it break)?
**Checklist for testing**
- [ ] check if this thing is active
- [ ] check if another thing is active
===== Things to consider =====
* Will this work need new alarms/monitoring added?
* Do we need to update any documentation (playbook or public documentation) for this work?
* etc.
===== Description (optional) =====
If possible, add information on how this story/epic will contribute to the bigger picture or how it relates to the milestones being targeted.