⭐ Cannon
Cannon is an idea for a project attempting to compute canonical/normalised URLs and extract some information from them ('entities'), merely by looking at the URL, and ideally without using the "rel=canonical" metadata.
I describe the problem it tries to solve here: "urls are broken", also see "motivation".
At the moment it's a subproject of promnesia: see cannon.py
and tests/cannon.py.
If anyone knows of similar efforts/prior art, please let me know! I'd really like to avoid reinventing the wheel here.
Table of Contents
- related cannon
- [A] * motivation cannon
- [A] I want urls that represent information, regardless of the way it's presented cannon
- [B] why not use "rel=canonical" metadata field? cannon
- [B] Google no longer providing original URL in AMP for image search results cannon
- [B] mobile versions of sites sometimes have different "canonical", e.g. mobile.twitter.com cannon
- [C] archive.org is messing with canonical cannon
- [D] e.g. this link doesn't have 'canonical' even though it's a mirror: https://solar.lowtechmagazine.com/2016/11/the-curse-of-the-modern-office.html cannon
- [D] no canonical on gist https://gist.github.com/dneto/2258454 cannon
- [B] parent and sibling relations can be determined from the URL cannonpromnesia
- [B] if the original page is gone I can still easily link my saved annotations (Instapaper/Pocket/Hypothesis) to archived page cannon
- [B] urls are a good candidate for determining 'entities' because they are at least somewhat curated cannon
- [C] normalization is tricky.. for some urls, stuff after # is important https://en.wikipedia.org/wiki/Tendon#cite_note-14 . for some, it's utter garbage cannon
- DONE [C] The Problem With URLs https://blog.codinghorror.com/the-problem-with-urls/ cannon
- [C] motivation: siloing: instapaper 'imports' pages and assigns an id: https://www.instapaper.com/read/1265139707 cannon
- [C] could normalize historic URLs which are already down? cannonlinkrot
- [A] * projects that could benefit from it cannon
- STRT [B] Hmm could be helpful for hypothesis? cannonhypothesis
- TODO [B] Ignore URL parameters - Feature Requests - Memex Community cannonworldbrain
- TODO [C] wonder if we could cooperate? cannonagora
- TODO [C] would be useful to use the same normalising engine for #archivebox for example? cannon
- TODO [C] could be useful for surfingkey/nyxt browser to hint 'interesting' urls? cannon
- STRT [C] archive.org cannonlinkrot
- TODO [C] if it's implemented as a helper extension/library, it could be useful for many other extensions cannon
- TODO [D] einaregilsson/Redirector: Browser extension (Firefox, Chrome, Opera, Edge) to redirect urls based on regex patterns, like a client side mod_rewrite cannon
- TODO [D] could reuse URL underlying etc with ampie? cannonampie
- [A] * prior art cannon
- TODO [A] ClearURLs / Addon: looks super super promising cannon
- cannon https://github.com/ClearURLs/Addon/wiki/Rules: Not super convinced JSON would work well in general, but anyway it's already pretty good.
- TODO [B] WorldBrain/memex-url-utils: Shared URL processing utilities for Memex extension and mobile apps. cannonworldbrain
- TODO [B] h/uri.py at 0fc8a0d345741d43b4f80856a7cbb8f5afa70f80 · hypothesis/h https://github.com/hypothesis/h/blob/0fc8a0d345741d43b4f80856a7cbb8f5afa70f80/h/util/uri.py cannonhypothesis
- cannonhypothesis excluded query params!
- cannonhypothesis right, I could probably reuse hypothesis's canonify and contribute back. looks very similar to mine
- TODO [B] coleifer/micawber: a small library for extracting rich content from urls cannon
- cannon ok, pretty interesting. it probably uses network, but could at least use it for testing (or maybe even 'enriching'?)
- TODO [C] sindresorhus/compare-urls: Compare URLs by first normalizing them cannon
- TODO [C] hypothesis: h/normalize_uris_test.py cannon
- TODO [C] niksite/url-normalize: URL normalization for Python cannon
- TODO [C] john-kurkowski/tldextract: Accurately separate the TLD from the registered domain and subdomains of a URL, using the Public Suffix List. cannon
- TODO [C] rbaier/python-urltools: Some functions to parse and normalize URLs. cannon
- [B] * ideas cannon
- [B] maybe we can achieve 95% accuracy with generic rules and by handling the most popular websites cannon
- TODO [B] if 'children' relations can't be determined by substring matching, perhaps cannon should generate 'virtual' urls? cannonpromnesia
- TODO [B] a special service to resolve siloed links like t.co ? cannonlinkrot
- STRT [B] just specify admissible regexes for urls so it's easier to unify? cannon
- cannon also this to summarize
- STRT [B] rethinking the whole approach… cannon
- TODO [C] use shared JS/python tests for canonifying? cannonffipromnesia
- TODO [C] should be idempotent? cannon
- TODO [C] hmm, maybe the extension can learn normalisation rules over time? by looking at canonical and refining the rules? cannon
- TODO [C] sample random links and their canonicals for testing cannon
- TODO [C] background thing that sucks in canonical urls and provides data for testing? cannonpromnesia
- TODO [C] how do we prune links that are potentially not secure to store? like certain URL parameters cannon
- TODO [D] need checks that urls don't contain stupid shit like trailing colons etc cannon
- TODO [C] hmm could use this api for checking normalization? cannon
- [C] * testcases cannon
- [B] Wendover Productions - YouTube cannon
- [B] roam links cannon
- [B] https://app.element.io/#/room/#blockchain:fosdem.org cannon
- [B] A Relational Turn for Data Protection? by Neil M. Richards, Woodrow Hartzog :: SSRN cannon
- STRT [B] A Brief Intro to Topological Quantum Field Theories. - YouTube https://www.youtube.com/watch?v=59uLGIrkMxM&list=WL&index=61&t=0s cannon
- TODO [B] normalise DOI cannon
- TODO [C] m.wikipedia normalisation could also be useful for hypothesis? cannonhypothesis
- cannonhypothesis X.m.wikipedia.org
- cannonhypothesis mm, it's got canonical though..
- TODO perhaps promnesia should respond both to canonical and its own idea of normalised (preferring canonical) cannonhypothesis
- STRT [C] fragments: Aharonov-Bohm Experiment https://physicstravelguide.com/experiments/aharonov-bohm#tab__concrete cannon
- cannon here I guess it could yield url with hash + parent url?
- TODO always assume that parents in uri hierarchy are actual parents? I guess that's fairly reasonable cannon
- [C] stuff like this: youtu.be/1TKSfAkWWN0 cannon
- cannon this is also motivation for canonifying. this is a redirect link in tweet, and there is no way to associate it with canonical
- [C] https://hubs.mozilla.com/#/ cannon
- [C] Writing well | defmacro cannon
- [C] maybe https://youtu.be/zRxI0DaQrag?t=1380 ? cannon
- [C] github: https://twitter.com/i/web/status/928602151286386688 this ends up trimmed with … :( cannon
- [C] github: https://twitter.com/i/web/status/1156086851633131520 cannon
- [C] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=941827 cannon
- [C] https://undeadly.org/cgi?action=article;sid=20170930133438 cannon
- TODO [C] hmm, server doesn't normalise properly?? (url escaping) cannon
- TODO [C] semiconductors video should be unified properly. well, or again hierarchical thing? might be too spammy for 'watch later' cannon
- [C] https://cstheory.stackexchange.com/questions/1920/examples-of-unrelated-mathematics-playing-a-fundamental-role-in-tcs/1925#1925: need parent link to trigger on this in cannon cannon
- [C] https://news.ycombinator.com/item?id=23537243#23540421 hmm, both id and # ? cannon
- [C] https://bugzilla.mozilla.org/show_bug.cgi?id=1411873 : ugh need to keep id cannon
- TODO [C] old.reddit and new reddit cannon
- [D] handle google.com/search cannon
- [D] https://www.c-span.org/video/?c4808083/rust-language-chosen the ? is sneaky cannon
- [D] https://melpa.org/#/async # is just redundant? cannon
- [D] Lisp Language http://wiki.c2.com/?LispLanguage ? is sneaky cannon
- [D] better regex for url extraction cannon
- [D] Vanquishing ‘Monsters’ in Foundations of Computer Science: Euclid, Dedekind, Frege, Russell, Gödel, Wittgenstein, Church, Turing, and Jaśkowski didn’t get them all … by Carl Hewitt :: SSRN cannon
- STRT [D] should be more defensive cannon
- cannon did I do it? https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=955208 'bug' parameter
- cannon https://unix.stackexchange.com/questions/117609/capture-error-of-ls-to-file#comment183614_117609
- DONE [B] make sure ? extracted correctly https://play.google.com/store/apps/details?id=com.faultexception.reader cannon
- DONE [C] https://news.ycombinator.com/item?id=12973788 cannon
- cannon wiki.c2.com pages don't even have canonical?
- [D] * misc cannon
- STRT [B] would be convenient to normalise reddit annotations so annotations from all comments would be collected cannon
- TODO [C] potential pypi project? https://pypi.org/project/cannon cannon
- TODO [C] hypothesis: wonder how it works on timestamped archive.org stuff? cannon
- TODO [C] hmm some local and remote pages may overlap cannon
- [C] Vision, Mission & Values — 2020 Update - WorldBrain.io - Medium cannon
- [C] Changed how threading works. by JakeHartnell · Pull Request 952 · hypothesis/h https://github.com/hypothesis/h/pull/952 cannonhypothesisreddit
- TODO [C] reddit: tested on https://www.reddit.com/r/explainlikeimfive/comments/1vavyq/eli5_godels_ontological_proof/ceqlupx/ cannonhypothesis
- cannonhypothesis so it looks like reddit refers to the 'post' page as canonical. Right.
- -------------------------------------------------- cannon
- [C] URLTeam - Archiveteam cannon
- [C] seomoz/url-py: URL Transformation, Sanitization cannon
- [C] (5) Jon Borichevskiy (@jondotbo) / Twitter cannonpromnesia
- TODO [B] ClearURLs – automatically remove tracking elements from URLs | Hacker News cannon
¶[A] * motivation cannon
Once you are sold on the motivation in this section and are wondering why this would require a separate library/database, check out the "testcases" section.
¶[A] I want urls that represent information, regardless of the way it's presented cannon
let alone all the tracking/etc crap
¶[B] "document equivalence" is a good term: How to establish (or avoid) document equivalence in the Hypothesis system : Hypothesis cannon
¶[B] why not use "rel=canonical" metadata field? cannon
¶[B] mobile versions of sites sometimes have different "canonical", e.g. mobile.twitter.com cannon
No one would dispute that a tweet is the same regardless of where it's presented, yet there is no easy way to unify this
¶[C] archive.org is messing with canonical cannon
¶[D] e.g. this link doesn't have 'canonical' even though it's a mirror: https://solar.lowtechmagazine.com/2016/11/the-curse-of-the-modern-office.html cannon
¶[D] no canonical on gist https://gist.github.com/dneto/2258454 cannon
same as https://gist.github.com/2258454 – hmm, this thing redirects now..
¶[B] parent and sibling relations can be determined from the URL cannonpromnesia
e.g. subreddit-post/user-comment/user-tweet, etc.
¶[B] if the original page is gone I can still easily link my saved annotations (Instapaper/Pocket/Hypothesis) to archived page cannon
¶[B] urls are a good candidate for determining 'entities' because they are at least somewhat curated cannon
¶[C] normalization is tricky.. for some urls, stuff after # is important https://en.wikipedia.org/wiki/Tendon#cite_note-14 . for some, it's utter garbage cannon
however we can sort of get away with normalizing on server only?
¶DONE [C] The Problem With URLs https://blog.codinghorror.com/the-problem-with-urls/ cannon
¶[C] motivation: siloing: instapaper 'imports' pages and assigns an id: https://www.instapaper.com/read/1265139707 cannon
so you can't connect your annotations on instapaper to notes etc
¶[C] could normalize historic URLs which are already down? cannonlinkrot
perhaps not super useful if we can't access them, but still
¶[A] * projects that could benefit from it cannon
Apart from Promnesia, I believe it could be quite useful for other projects.
¶STRT [B] Hmm could be helpful for hypothesis? cannonhypothesis
¶NEXT [B] discuss cannon (maybe on Slack)? cannonhypothesis
¶[C] Annotation of content on sites like Facebook or Twitter? - Google Groups cannonhypothesis
kinda related since they basically want canonical urls
¶TODO [B] Ignore URL parameters - Feature Requests - Memex Community cannonworldbrain
¶TODO [C] wonder if we could cooperate? cannonagora
¶TODO [C] would be useful to use the same normalising engine for #archivebox for example? cannon
¶TODO [C] could be useful for surfingkey/nyxt browser to hint 'interesting' urls? cannon
¶STRT [C] archive.org cannonlinkrot
e.g. if the link is not present in archive.org, it doesn't mean it's not archived under a different canonical
¶TODO [C] if it's implemented as a helper extension/library, it could be useful for many other extensions cannon
e.g. blockers, various highlighters, hypothesis, etc
¶TODO [D] einaregilsson/Redirector: Browser extension (Firefox, Chrome, Opera, Edge) to redirect urls based on regex patterns, like a client side mod_rewrite cannon
¶TODO [D] could reuse URL underlying etc with ampie? cannonampie
¶[A] * prior art cannon
URL normalization algorithm should be shared with other projects to the maximum extent possible.
If not the exact algorithm, at least the 'curated' parts of it like regexes, testcases, etc should be shared.
It's boring grunt work that should only be done once (e.g. like the timezone database).
¶TODO [A] ClearURLs / Addon: looks super super promising cannon
Once ClearURLs has cleaned the address, it will look like this: https://www.amazon.com/dp/exampleProduct
¶ https://github.com/ClearURLs/Addon/wiki/Rules: Not super convinced JSON would work well in general, but anyway it's already pretty good. cannon
¶TODO [B] WorldBrain/memex-url-utils: Shared URL processing utilities for Memex extension and mobile apps. cannonworldbrain
¶TODO [B] h/uri.py at 0fc8a0d345741d43b4f80856a7cbb8f5afa70f80 · hypothesis/h https://github.com/hypothesis/h/blob/0fc8a0d345741d43b4f80856a7cbb8f5afa70f80/h/util/uri.py cannonhypothesis
¶ excluded query params! cannonhypothesis
¶ right, I could probably reuse hypothesis's canonify and contribute back. looks very similar to mine cannonhypothesis
¶TODO [B] coleifer/micawber: a small library for extracting rich content from urls cannon
¶ ok, pretty interesting. it probably uses network, but could at least use it for testing (or maybe even 'enriching'?) cannon
¶TODO [C] sindresorhus/compare-urls: Compare URLs by first normalizing them cannon
compareUrls('HTTP://sindresorhus.com/?b=b&a=a', 'sindresorhus.com/?a=a&b=b');
¶[C] sindresorhus/normalize-url cannon
stripWWW can't handle amp etc
¶TODO [C] hypothesis: h/normalize_uris_test.py cannon
¶TODO [C] niksite/url-normalize: URL normalization for Python cannon
¶TODO [C] john-kurkowski/tldextract: Accurately separate the TLD from the registered domain and subdomains of a URL, using the Public Suffix List. cannon
hmm could use this for better extraction…
¶[B] * ideas cannon
¶[B] maybe we can achieve 95% accuracy with generic rules and by handling the most popular websites cannon
for the rest
- allow user to customize
- allow user to submit normalization errors (where?)
¶TODO [B] if 'children' relations can't be determined by substring matching, perhaps cannon should generate 'virtual' urls? cannonpromnesia
¶TODO [B] a special service to resolve siloed links like t.co ? cannonlinkrot
Could also be useful for Archive.org/archivebox/etc. But a bit out of scope for this project..
¶STRT [B] just specify admissible regexes for urls so it's easier to unify? cannon
e.g. twitter.com/user/status/statusid
maybe normalise to this?
twitter.com/i/web/status/1053151870791835649
reddit.com/comments/5ombk8 – huh, normalise to this?
TODO m.reddit/old.reddit
en.m.wikipedia/ru.m.wikipedia
maybe strip off the subdomain completely?
youtube.com/watch?v=xAy--wpDQ&list=PL0kyDgrqAiUEF5d7krLIds1ebhTxCjm&shuffle=221
youtube.com/watch?v=Woa3MPijE3s&list=PL0kyDgrqAiXKspaa1GIS0jbbLrsAa3sk&spfreload=10
¶ also this to summarize cannon
sqlite3 promnesia.sqlite 'select domain, count(domain) from (select substr(normurl, 0, instr(normurl, "/")) as domain from visits) group by domain order by count(domain)'
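The 'admissible regexes' idea above can be sketched as a small rule table; the patterns and canonical shapes below are illustrative guesses for twitter/reddit/wikipedia, not promnesia's actual rules:

```python
import re

# Hypothetical rule table: (pattern over the protocol-less URL, canonical template).
RULES = [
    # any twitter status view -> the bare status form
    (re.compile(r'(?:mobile\.)?twitter\.com/(?:[^/]+|i/web)/status/(?P<id>\d+)'),
     r'twitter.com/i/web/status/\g<id>'),
    # old/new/mobile reddit comment pages -> reddit.com/comments/<id>
    (re.compile(r'(?:old\.|new\.|m\.|www\.)?reddit\.com/r/[^/]+/comments/(?P<id>\w+)'),
     r'reddit.com/comments/\g<id>'),
    # mobile wikipedia subdomains -> plain per-language wikipedia
    (re.compile(r'(?P<lang>\w+)\.(?:m\.)?wikipedia\.org/wiki/(?P<page>[^?#]+)'),
     r'\g<lang>.wikipedia.org/wiki/\g<page>'),
]

def canonify(url: str) -> str:
    """Strip the protocol, then rewrite via the first matching rule."""
    url = re.sub(r'^https?://', '', url)
    for pat, repl in RULES:
        m = pat.search(url)
        if m:
            return m.expand(repl)
    return url
```

The upside of a table like this is that the curated part (the regexes) is plain data and could be shared across implementations.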
¶STRT [B] rethinking the whole approach… cannon
consider https://www.youtube.com/watch?v=wHrCkyoe72U&list=WL
basically
- cut off the protocol, merely for simplicity? I guess it makes everything much easier
- the result is always 'composed of' inputs, e.g. it maps to youtube/wHrCkyoe72U; both parts are in the original link.
  might not be the case if domain names are remapped though.. e.g. youtu.be
- sort query parts alphabetically (although it might make sense to make it hierarchy aware?)
- treat path parts & query the same way: parts are query parameters with None keys
- to handle domain names better, replace the dots before the first / with /, e.g. www.youtube.com -> www/youtube/com,
  then treat them the same way as subpaths
i.e. we get
None www | drop
None youtube | keep
None com | drop
None watch | drop
list WL | keep? – actually this could be considered a 'tag'? unclear
v wHrCkyoe72U | keep
ok so how do we generalize from two examples?
e.g. say we also have
youtube.ru/watch?v=abacaba -> youtube/abacaba
we get
youtube | keep
ru | drop
watch | drop
v abacaba | keep
I suppose it could guess that if we keep a query parameter once, we'll keep it always?
and if we extracted a certain substring without a query parameter, we'll also always keep it as is?
TODO how about this?
https://news.ycombinator.com/reply?id=25100810&goto=item%3Fid%3D25099862%2325100810
it's a reply to https://news.ycombinator.com/item?id=25100035
which is a comment to https://news.ycombinator.com/item?id=25099862
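The decomposition above can be sketched directly: domain labels and path segments become parts with a None key, query parameters keep their keys and get sorted; the keep/drop sets here are illustrative guesses, not learned rules:

```python
from urllib.parse import urlsplit, parse_qsl

# Illustrative keep/drop decisions; a real implementation would curate
# or learn these per-domain from examples like the ones above.
DROP_TOKENS = {'www', 'com', 'watch', 'm', 'ru', 'org'}
DROP_PARAMS = {'list', 'index', 't', 'spfreload', 'shuffle'}

def parts(url: str):
    """Flatten a URL into (key, value) parts: domain labels and path
    segments get key None, query params keep their key, sorted."""
    s = urlsplit(url if '//' in url else '//' + url)
    out = [(None, label) for label in s.netloc.split('.')]
    out += [(None, seg) for seg in s.path.split('/') if seg]
    out += sorted(parse_qsl(s.query))  # sort query parts alphabetically
    return out

def canonify(url: str) -> str:
    kept = [v for k, v in parts(url)
            if (k is None and v not in DROP_TOKENS)
            or (k is not None and k not in DROP_PARAMS)]
    return '/'.join(kept)
```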
¶TODO [C] use shared JS/python tests for canonifying? cannonffipromnesia
¶TODO [C] should be idempotent? cannon
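Idempotence (canonify(canonify(u)) == canonify(u)) is cheap to check as a property test; the canonify below is just a stand-in normaliser for illustration:

```python
def canonify(url: str) -> str:
    # stand-in normaliser: strip protocol, 'www.', and trailing slash
    url = url.removeprefix('https://').removeprefix('http://')
    url = url.removeprefix('www.')
    return url.rstrip('/')

def check_idempotent(urls):
    """canonify(canonify(u)) must equal canonify(u) for every input."""
    for u in urls:
        once = canonify(u)
        assert canonify(once) == once, u
```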
¶TODO [C] hmm, maybe the extension can learn normalisation rules over time? by looking at canonical and refining the rules? cannon
¶TODO [C] sample random links and their canonicals for testing cannon
¶TODO [C] background thing that sucks in canonical urls and provides data for testing? cannonpromnesia
¶TODO [C] how do we prune links that are potentially not secure to store? like certain URL parameters cannon
¶TODO [D] need checks that urls don't contain stupid shit like trailing colons etc cannon
¶TODO [C] hmm could use this api for checking normalization? cannon
http get 'http://archive.org/wayback/available?url=https://stackoverflow.com/questions/1425892/how-do-you-merge-two-git-repositories'
{
  "archived_snapshots": {
    "closest": {
      "available": true,
      "status": "200",
      "timestamp": "20210219235548",
      "url": "http://web.archive.org/web/20210219235548/https://stackoverflow.com/questions/1425892/how-do-you-merge-two-git-repositories"
    }
  },
  "url": "https://stackoverflow.com/questions/1425892/how-do-you-merge-two-git-repositories"
}
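A sketch of driving that availability endpoint programmatically; only request construction and response parsing are shown, and the actual fetch is left to whatever HTTP client is handy:

```python
import json
from urllib.parse import urlencode

API = 'http://archive.org/wayback/available'

def availability_request(url: str) -> str:
    """Build the availability-API request URL for a given page."""
    return API + '?' + urlencode({'url': url})

def closest_snapshot(response_text: str):
    """Extract the closest archived snapshot URL from a response body,
    or None if nothing is archived."""
    data = json.loads(response_text)
    closest = data.get('archived_snapshots', {}).get('closest')
    return closest['url'] if closest and closest.get('available') else None
```

Feeding the normalised and the raw URL through this and comparing results would be one way to spot normalisation mistakes.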
¶[C] * testcases cannon
Some tricky cases which would be nice to get right
¶[B] Wendover Productions - YouTube cannon
¶[B] roam links cannon
¶[B] A Relational Turn for Data Protection? by Neil M. Richards, Woodrow Hartzog :: SSRN cannon
abstractid
¶STRT [B] A Brief Intro to Topological Quantum Field Theories. - YouTube https://www.youtube.com/watch?v=59uLGIrkMxM&list=WL&index=61&t=0s cannon
eh, rules might be a bit complicated. E.g. if both v and list are present, we wanna ditch list, otherwise keep list
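That conditional rule (drop list when v is present, otherwise keep it) is easy to express as a sketch; dropping index too is an extra guess on my part:

```python
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def clean_youtube(url: str) -> str:
    """Drop 'list' only when a concrete video id ('v') is present."""
    s = urlsplit(url)
    q = parse_qs(s.query)
    if 'v' in q and 'list' in q:
        del q['list']          # playlist is noise when pointing at a video
    q.pop('index', None)       # position within a playlist: assumed noise
    return urlunsplit((s.scheme, s.netloc, s.path,
                       urlencode(q, doseq=True), s.fragment))
```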
¶TODO [B] normalise DOI cannon
Ah sure: This DOI: https://doi.org/10.1073/pnas.1211902109 should lead to this paper: https://pnas.org/content/109/48/E3324 .
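DOI normalisation can key on the DOI itself (the 10.registrant/suffix part), so doi.org links, dx.doi.org links, and raw DOIs all map to one form; the regex below is a common community pattern, not the official DOI grammar:

```python
import re

# matches the registrant/suffix shape of modern DOIs
DOI_RE = re.compile(r'\b(10\.\d{4,9}/[^\s"<>]+)')

def normalise_doi(url_or_text: str):
    """Return 'doi.org/<doi>' if a DOI is present, else None."""
    m = DOI_RE.search(url_or_text)
    return f'doi.org/{m.group(1)}' if m else None
```

Mapping the DOI onto the publisher's landing page (like the pnas.org URL above) would still need a network lookup, though.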
¶TODO [C] m.wikipedia normalisation could also be useful for hypothesis? cannonhypothesis
¶ X.m.wikipedia.org cannonhypothesis
¶ mm, it's got canonical though.. cannonhypothesis
¶TODO perhaps promnesia should respond both to canonical and its own idea of normalised (preferring canonical) cannonhypothesis
¶STRT [C] fragments: Aharonov-Bohm Experiment https://physicstravelguide.com/experiments/aharonov-bohm#tab__concrete cannon
url normalising… this is an example where fragments are important
¶ here I guess it could yield url with hash + parent url? cannon
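Yielding both the exact fragment URL and its fragment-less parent could look like this sketch:

```python
from urllib.parse import urldefrag

def with_parent(url: str):
    """For a URL with a meaningful fragment, yield both the exact URL
    and its fragment-less parent, so visits to either can be matched."""
    parent, frag = urldefrag(url)
    return [url, parent] if frag else [url]
```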
¶TODO always assume that parents in uri hierarchy are actual parents? I guess that's fairly reasonable cannon
¶[C] stuff like this: youtu.be/1TKSfAkWWN0 cannon
¶ this is also motivation for canonifying. this is a redirect link in tweet, and there is no way to associate it with canonical cannon
¶[C] https://hubs.mozilla.com/#/ cannon
¶[C] Writing well | defmacro cannon
support for archive.org and test on this page
¶[C] Wayback Machine https://web.archive.org/web/2019*/http://www.defmacro.org/2016/12/22/writing-well.html cannon
¶[C] maybe https://youtu.be/zRxI0DaQrag?t=1380 ? cannon
¶[C] github: https://twitter.com/i/web/status/928602151286386688 this ends up trimmed with … :( cannon
¶[C] github: https://twitter.com/i/web/status/1156086851633131520 cannon
¶[C] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=941827 cannon
https://wiki.debian.org/SecureBoot#MOK_-_Machine_Owner_Key
canonical: wiki.debian.org/SecureBoot
sources: notes [[https://wiki.debian.org/SecureBoot][SecureBoot - Debian Wiki]]
¶[C] https://undeadly.org/cgi?action=article;sid=20170930133438 cannon
'sid' matters here
¶TODO [C] hmm, server doesn't normalise properly?? (url escaping) cannon
ru.wikipedia.org/wiki/Грамматикализация
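Comparing the percent-escaped and human-readable forms of the same URL needs explicit (un)quoting; a sketch, where reescape picks one canonical escaped form if byte-exact keys are needed:

```python
from urllib.parse import unquote, quote

def unescape(url: str) -> str:
    """Decode percent-escapes so the server-escaped and human-readable
    forms of the same wikipedia URL compare equal."""
    return unquote(url)

def reescape(url: str) -> str:
    """Alternatively: one canonical percent-escaped form (idempotent)."""
    return quote(unquote(url), safe='/:?&=#')
```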
¶TODO [C] semiconductors video should be unified properly. well, or again hierarchical thing? might be too spammy for 'watch later' cannon
¶[C] https://cstheory.stackexchange.com/questions/1920/examples-of-unrelated-mathematics-playing-a-fundamental-role-in-tcs/1925#1925: need parent link to trigger on this in cannon cannon
¶[C] https://news.ycombinator.com/item?id=23537243#23540421 hmm, both id and # ? cannon
¶[C] https://bugzilla.mozilla.org/show_bug.cgi?id=1411873 : ugh need to keep id cannon
¶TODO [C] old.reddit and new reddit cannon
¶[D] handle google.com/search cannon
¶[D] https://www.c-span.org/video/?c4808083/rust-language-chosen the ? is sneaky cannon
¶[D] https://melpa.org/#/async # is just redundant? cannon
¶[D] Lisp Language http://wiki.c2.com/?LispLanguage ? is sneaky cannon
¶[D] better regex for url extraction cannon
eh, urls can have commas… e.g. http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html
so, for csv we need a separate extractor.
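A sketch of an extraction regex that tolerates commas inside URLs (only whitespace and quotes terminate a match) and instead trims trailing sentence punctuation separately:

```python
import re

# commas are legal inside URLs, so don't break on them
URL_RE = re.compile(r'https?://[^\s<>"\']+')

def extract_urls(text: str):
    """Find URLs in free text, then strip trailing punctuation that
    most likely belongs to the surrounding sentence."""
    return [m.group().rstrip('.,;:)') for m in URL_RE.finditer(text)]
```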
¶[D] Vanquishing ‘Monsters’ in Foundations of Computer Science: Euclid, Dedekind, Frege, Russell, Gödel, Wittgenstein, Church, Turing, and Jaśkowski didn’t get them all … by Carl Hewitt :: SSRN cannon
¶STRT [D] should be more defensive cannon
ValueError: netloc ' +79869929087, mak34@gmail.com' contains invalid characters under NFKC normalization
¶ did I do it? https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=955208 'bug' parameter cannon
¶ https://unix.stackexchange.com/questions/117609/capture-error-of-ls-to-file#comment183614_117609 cannon
¶DONE [B] make sure ? extracted correctly https://play.google.com/store/apps/details?id=com.faultexception.reader cannon
¶DONE [C] https://news.ycombinator.com/item?id=12973788 cannon
id here is important
¶ wiki.c2.com pages don't even have canonical? cannon
¶[D] * misc cannon
¶STRT [B] would be convenient to normalise reddit annotations so annotations from all comments would be collected cannon
¶TODO [C] potential pypi project? https://pypi.org/project/cannon cannon
¶TODO [C] hypothesis: wonder how it works on timestamped archive.org stuff? cannon
¶TODO [C] hmm some local and remote pages may overlap cannon
e.g. this is very likely to be mapped to normal py docs
file:///usr/share/doc/python3/html/library/contextlib.html
¶[C] Vision, Mission & Values — 2020 Update - WorldBrain.io - Medium cannon
fragments are often random and useless
even default org-mode is guilty
¶[C] Changed how threading works. by JakeHartnell · Pull Request 952 · hypothesis/h https://github.com/hypothesis/h/pull/952 cannonhypothesisreddit
¶TODO [C] reddit: tested on https://www.reddit.com/r/explainlikeimfive/comments/1vavyq/eli5_godels_ontological_proof/ceqlupx/ cannonhypothesis
huh, so reddit seems to normalise to the main page, and displays annotations as 'orphaned' for comment views?
¶ so it looks like reddit refers to the 'post' page as canonical. Right. cannonhypothesis
¶-------------------------------------------------- cannon
¶[C] URLTeam - Archiveteam cannon
¶[C] (5) Jon Borichevskiy (@jondotbo) / Twitter cannonpromnesia
hmm how to resolve twitter renames?…
¶TODO [B] ClearURLs – automatically remove tracking elements from URLs | Hacker News cannon
Related, if you're looking to clean urls on the backend, here's my current pattern: startswith: 'utm_', 'ga_', 'hmb_', 'ic_', 'fb_', 'pd_rd', 'ref_', 'share_', 'client_', 'service_' or has: '$/ref@amazon.', '.tsrc', 'ICID', '_xtd', '_encoding@amazon.', '_hsenc', '_openstat', 'ab', 'action_object_map', 'action_ref_map', 'action_type_map', 'amp', 'arc404', 'affil', 'affiliate', 'app_id', 'awc', 'bfsplash', 'bftwuk', 'campaign', 'camp', 'cip', 'cmp', 'CMP', 'cmpid', 'curator', 'cvid@bing.com', 'efg', 'ei@google.', 'fbclid', 'fbplay', 'feature@youtube.com', 'feedName', 'feedType', 'form@bing.com', 'forYou', 'fsrc', 'ftcamp', 'ga_campaign', 'ga_content', 'ga_medium', 'ga_place', 'ga_source', 'ga_term', 'gi', 'gclid@youtube.com', 'gs_l', 'gws_rd@google.', 'igshid', 'instanceId', 'instanceid', 'kw@youtube.com', 'maca', 'mbid', 'mkt_tok', 'mod', 'ncid', 'ocid', 'offer', 'origin', 'partner','pq@bing.com', 'print', 'printable', 'psc@amazon.', 'qs@bing.com', 'rebelltitem', 'ref', 'referer', 'referrer', 'rss', 'ru', 'sc@bing.com', 'scrolla', 'sei@google.', 'sh', 'share', 'sk@bing.com', 'source', 'sp@bing.com', 'sref', 'srnd', 'supported_service_name', 'tag', 'taid', 'time_continue', 'tsrc', 'twsrc', 'twcamp', 'twclid', 'tweetembed', 'twterm', 'twgr', 'utm', 'ved@google.', 'via', 'xid', 'yclid', 'yptr'
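The commenter's pattern can be sketched as prefix rules plus exact rules, with 'param@domain' scoping a rule to one site; only a small excerpt of the list above is wired in here:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# excerpt of the prefix/exact rules quoted above; '@domain' scopes a rule
PREFIXES = ('utm_', 'ga_', 'fb_', 'ref_', 'share_')
EXACT = {'fbclid', 'igshid', 'mkt_tok', 'yclid',
         'gclid@youtube.com', 'psc@amazon.'}

def drops(key: str, host: str) -> bool:
    if key.startswith(PREFIXES) or key in EXACT:
        return True
    # domain-scoped rules: 'param@domain' applies only on that host
    return any(r.split('@')[0] == key and r.split('@')[1] in host
               for r in EXACT if '@' in r)

def clean(url: str) -> str:
    s = urlsplit(url)
    q = [(k, v) for k, v in parse_qsl(s.query) if not drops(k, s.netloc)]
    return urlunsplit((s.scheme, s.netloc, s.path, urlencode(q), s.fragment))
```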