00:23:31 pabs: FYI, I'm going to let the ArchiveBot jobs for GNOME Bugzilla finish. It might be worth contacting them about the issue history (which is just gone) and XML export (for programmatic access, returns the normal view now) and possibly attachment description page (returns the attachment instead), but I won't have time for that anytime soon. 00:23:50 The actual issues and attachments exist, so at least that will be covered. 02:15:31 JAA: do you have some example URLs that are broken? if so I could file an issue (also if you have a GNOME GitLab account I could CC you) 02:17:53 pabs: Random example from a bug that was fully covered: https://bugzilla.gnome.org/show_bug.cgi?id=36951 → history https://web.archive.org/web/20210712102539/https://bugzilla.gnome.org/show_activity.cgi?id=36951 and XML version https://web.archive.org/web/20210712102539/https://bugzilla.gnome.org/show_bug.cgi?ctype=xml&id=36951 02:20:37 The attachment description page would be e.g. https://bugzilla.gnome.org/attachment.cgi?id=94167&action=edit . This URL was captured but isn't in the WBM yet because the WARC is still sitting on the ArchiveBot pipeline. 02:21:25 I don't have a GNOME GitLab account. 02:28:55 ok, I'll take a look later 02:29:15 Cheers 07:02:58 Tech234a edited YouTube (+80, /* Older unlisted videos (July 2021) */ Add…): https://wiki.archiveteam.org/?diff=47020&oldid=47013 07:17:00 Tech234a edited YouTube (+819, /* Older unlisted videos (July 2021) */ Add…): https://wiki.archiveteam.org/?diff=47021&oldid=47020 08:59:55 Is there a channel for Google Drive? 12:44:03 thuban: Your regex doesn't produce any results. (And I've scanned the whole dataset) 12:47:02 fuck, i left an asterisk out. '(file|image):\s*"([^"]*)",' 13:00:11 thuban: Yeah. I figured that. Do you care about having file and image separate, or do you just want one big list? 
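The missing-asterisk bug discussed above can be demonstrated with a short Python sketch; the two sample strings are the ones quoted later in the log, and everything else here is illustrative, not the actual extractor:

```python
import re

# The first attempt was missing the asterisk, so [^"] matched exactly one
# character and no real URL ever fit between the quotes:
BROKEN = re.compile(r'(file|image):\s*"([^"])",')
# Corrected pattern from the chat:
FIXED = re.compile(r'(file|image):\s*"([^"]*)",')

# Sample strings quoted from the detail page discussed later in the log.
sample = (
    'file: "https://app4.rthk.hk/podcast/media/rthkmemory/b_v08.mp4",\n'
    'image: "https://rthkmemorycms.rthk.hk/photo/media/thumbnail/108",\n'
)

files = [url for kind, url in FIXED.findall(sample) if kind == "file"]
images = [url for kind, url in FIXED.findall(sample) if kind == "image"]
```

The broken pattern yields zero matches on this sample, while the corrected one pulls out both URLs.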
13:03:13 separate, if that wouldn't require any effort on your part; otherwise together 13:03:42 (i _think_ we already got all the thumbnails in the regular ab run, but i need to check) 13:04:57 Cool. That's easy. My system doesn't really do capture groups so I have to do a second pass to get the urls out of the 'image: ""' strings 13:05:19 It gives me a big list of regex matches per warc 13:05:28 And then I post-process from there 13:07:07 I also only process text/ and application/json entries. I don't match on image or video files, for obvious reasons 13:09:57 thuban: I've updated the regexes and am re-running. It looks to be obtaining urls. 13:10:54 i actually tested it this time, haha 13:11:36 Cool 13:11:45 I'll get you a sample of one file just to check it by you 13:26:21 thuban: Here's a sample from one of the warc files: https://transfer.archivete.am/w7DSd/file.txt https://transfer.archivete.am/orEBX/image.txt 13:26:31 This look good to you? 13:27:11 yep! 13:27:37 Cool. Still processing the rest. But I'm doing this singlethreaded because I'm lazy. 13:28:06 It's got maybe 10 minutes left 13:32:32 thuban: It's a mildly hacked-up extractor, but it's doing the job. https://s3.services.ams.aperture-laboratories.science/rewby/public/2a4b8143-8fbb-406f-8880-503b8032405f/1627911131.0666816.png 13:34:17 nice 13:37:54 thuban: All done! https://transfer.archivete.am/15RnqL/file.txt https://transfer.archivete.am/CHsW5/image.txt 14:29:35 rewby: for some reason i'm getting only 86 unique urls from either of those files when there should be many more.
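The two-pass setup described above (a group-less first pass over decoded record text, then a post-processing pass to split each `file: "...",` / `image: "...",` hit into key and bare URL) might look roughly like this; the function name and plumbing are hypothetical, not the real extractor:

```python
import re

# Pass 1: find whole 'file: "...",' / 'image: "...",' hits without using
# capture groups, as a matcher that only returns full matches would.
MATCHER = re.compile(r'(?:file|image):\s*"[^"]*",')
# Pass 2: post-process each hit to split out the key and the bare URL.
SPLITTER = re.compile(r'^(file|image):\s*"([^"]*)",$')

def extract_urls(text):
    """Return {'file': [...], 'image': [...]} from one decoded record."""
    out = {"file": [], "image": []}
    for hit in MATCHER.findall(text):
        kind, url = SPLITTER.match(hit).groups()
        out[kind].append(url)
    return out
```

Running this per WARC gives the "big list of regex matches per warc" that then gets post-processed and merged.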
14:29:45 for example: https://app4.rthk.hk/special/rthkmemory/details/hk-footprints/108 is in the warcs, and running the (corrected) regex on that page yields 'file: "https://app4.rthk.hk/podcast/media/rthkmemory/b_v08.mp4",' and 'image: "https://rthkmemorycms.rthk.hk/photo/media/thumbnail/108",' 14:29:53 but 'https://app4.rthk.hk/podcast/media/rthkmemory/b_v08.mp4' is not in file.txt and 'https://rthkmemorycms.rthk.hk/photo/media/thumbnail/108' is not in image.txt. 14:30:06 Uh. Lemme check 14:31:50 idk what your plumbing looks like, but is it possible you ran one warc repeatedly instead of all the warcs? (24 warcs, 24 copies of each url i _do_ have) 14:32:13 (oh wait nvm, 25 warcs) 14:33:02 thuban: d'oh. I ran all the warcs, but I didn't concat the results properly 14:33:11 Lemme fix that 14:33:46 gotcha, thanks 14:33:58 thuban: How's this? https://transfer.archivete.am/JLRYd/file.txt https://transfer.archivete.am/sInqE/image.txt 14:34:52 Hm. Still not quite right I think 14:36:56 It's better but still not quite there 14:39:01 yeah... i do expect there to be a few copies of each result (each detail page has a base url and then two possible language parameters) but that's not what it looks like is happening 14:40:14 I'm double checking a few things. 14:40:27 Hmmm. 14:40:34 I wonder if we're dealing with an encoding problem 14:41:19 thuban: I'm doing another run with some tweaks that might help. 14:41:49 If you still find missing things, I'll have to go and manually dig into the warcs to see what's wrong because that'll be a bug with my warc reader 14:43:33 i think to confirm anything missing i would have to manually download and zgrep the warcs--that other one was just a lucky spot-check 14:43:47 Fair enough 14:43:56 Just zgrepping doesn't always work 14:44:10 oh? 14:44:26 The problem is that warcs contain raw http responses. Which means your content can be encoded a number of ways.
It's not uncommon to have a gzipped response or a brotli-compressed response 14:44:35 ah, yeah 14:45:09 There's a lot of screwery going on in this software to try and deal with this 14:46:32 i knew there was a reason i asked you instead of trying to do it myself ;) 15:14:12 thuban: Here's another attempt. I turned off all the "smart"ness. It should've gotten everything unless there was a decoding issue. https://transfer.archivete.am/ILUoM/file_unique.txt https://transfer.archivete.am/xBjlu/image_unique.txt 15:28:55 yeah, that's more consistent with what i was expecting 15:45:08 huh... so it looks like archivebot successfully got everything (except a couple of m3u8s) in the original run. i wonder why playback doesn't work in the wbm? 15:45:52 Are there any POST requests involved? 15:46:14 Or maybe javascript that's unhappy? 15:46:52 lol, the only requests that fail are jwplayer's jwpsrv.js and sharing.js, which have somehow been double-rewritten: e.g. https://web.archive.org/web/20210728093807/https://web.archive.org/web/20210728093807/https://ssl.p.jwpcdn.com/6/8/jwpsrv.js . 15:47:29 (single-rewritten does exist in the archive and presumably would work.) 15:47:59 Huh. Interesting quirk 16:02:08 https://web.archive.org/web/20210728093807js_/https://app4.rthk.hk/special/rthkmemory/assets/js/jwplayer/jwplayer.js 16:05:20 in the 'c.repo' function (which returns the base path jwplayer uses to get some assets) the url is rewritten once when the string literal with the original cdn's url is used, then again when the generated url string is munged for ssl 16:07:21 i guess there's no principled way to avoid this... 20:38:21 has anyone archived drivers, manuals, sdks and the like from canon's website? just figured i should ask before trying to archive it myself 20:38:38 Got a link to the site and we can check?
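On the earlier point about why plain zgrepping a WARC can miss matches: the stored HTTP bodies carry their own Content-Encoding, so they have to be decoded before the regex pass. A minimal sketch of that step (deflate handling simplified; some servers send raw-deflate streams, and 'br' needs the third-party brotli package):

```python
import gzip
import zlib

def decode_body(raw: bytes, content_encoding: str) -> bytes:
    """Undo the HTTP Content-Encoding on a captured response body so a
    plain-text regex can be run over it. Identity, gzip, and (zlib-wrapped)
    deflate are stdlib; brotli ('br') needs the third-party brotli package."""
    enc = (content_encoding or "").strip().lower()
    if enc in ("", "identity"):
        return raw
    if enc == "gzip":
        return gzip.decompress(raw)
    if enc == "deflate":
        return zlib.decompress(raw)
    if enc == "br":
        import brotli  # third-party; not in the stdlib
        return brotli.decompress(raw)
    raise ValueError(f"unhandled Content-Encoding: {enc}")
```

Since the outer WARC file is typically gzipped as well, a naive zgrep only undoes that outer layer, which is why the body-level decoding above matters.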
20:41:27 example page for a specific camera: https://www.usa.canon.com/internet/portal/us/home/support/details/cameras/point-and-shoot-digital-cameras/slim-stylish-cameras/powershot-a2500/powershot-a2500?tab=drivers_downloads 20:41:36 and the place where i got that link from: https://www.usa.canon.com/internet/portal/us/home/support/drivers-downloads 20:43:08 Hmm, I could give it a go in AB and see how it goes 20:44:00 might work for things like the reference photos, but the section that lists downloads uses js and probably would need manual work to scrape 20:44:08 i was writing a lua script to do exactly that 20:45:21 Urgh, same for the manuals, it's all js 20:45:42 It's all running through AB now anyway so we at least grab what we can 20:48:23 if you write your script to get the urls for the downloads, we can run that list through archivebot, too, so that at least the files will be in the wayback machine 20:50:48 ^Forgot about that 20:55:05 in my list of urls, should i include the original urls or the ones they redirect to? since all of them are redirects 20:55:10 e.g. https://pdisp01.c-wss.com/gdl/WWUFORedirectTarget.do?id=MDMwMDAxMDYyODAx&cmp=ABR&lang=EN 20:55:45 Original means we'll archive the redirect too 20:55:46 archivebot can follow redirects, so it's probably best to use the originals (since that way both will point to the file) 20:55:57 ^ what he said 21:00:44 Uh 21:01:04 Generally yes, but it depends. 21:01:48 If all of the downloads behave like the above, i.e. the actual downloads are on a different host, it's fine. 22:25:42 OrIdow6 edited Framasoft (+79, Correction on discovery source): https://wiki.archiveteam.org/?diff=47022&oldid=47014
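Returning to the double-rewritten jwplayer URLs from earlier: the breakage is mechanical enough that a doubly-prefixed Wayback URL can be collapsed back to a single prefix after the fact. A sketch, assuming the standard /web/<14-digit timestamp><optional modifier like js_>/ layout:

```python
import re

# Matches one Wayback Machine prefix: /web/<14-digit timestamp + optional
# lowercase modifier such as js_>/ followed by the original URL.
WBM_PREFIX = re.compile(r'^https?://web\.archive\.org/web/\d{14}[a-z_]*/')

def collapse_wbm(url: str) -> str:
    """Strip repeated Wayback prefixes until at most one remains,
    e.g. .../web/TS/https://web.archive.org/web/TS/X -> .../web/TS/X."""
    while True:
        m = WBM_PREFIX.match(url)
        if not m:
            return url
        rest = url[m.end():]
        if WBM_PREFIX.match(rest):
            url = rest       # drop the outer prefix, keep checking
        else:
            return url       # a single prefix (or none) is left
```

This only normalizes the URL string; as noted in the log, the single-rewritten capture does exist in the archive, so the collapsed form should resolve.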