07:59:21 arkiver: Is current best practice for a site (Egloos) with external links still to get them in the grab, or is there a way I can send them to #// somehow?
08:09:46 OrIdow6: best practice is to send outlinks to #// !
08:10:03 just add them to a table and i'll add a backfeed key for it
08:14:07 arkiver: Alright, thanks
08:16:50 what would be the best wget settings for warc files? I'm trying to mirror something that needs cookies and I want to test out mirroring them with warc so I can go through urls faster, then just submit them here later
08:23:10 nighthnh099_: use grab-site, wget is bugged
08:26:54 oh okay, from what I'm reading this is entirely warc? I'll stick to wget for personal mirrors I guess
08:27:56 yes, but you can extract WARCs later or use a tool like replayweb.page. wpull/grab-site have advantages over wget on queue management, too
08:28:22 wget does its retries immediately when a request fails, wpull puts them at the end (useful if something 500s and some time waiting unsticks it)
08:29:03 and wget keeps the entire queue in memory, have fun with large forum crawls with many URLs, that can get a few gigabytes of RAM usage just for the queue. wpull uses a sqlite db for that
08:50:40 hmm
08:51:14 JAA: masterx244|m: do we have a list anywhere of what would make Wget-AT more usable for the regular user outside of Warrior projects?
08:51:48 I could try to make changes to Wget-AT to support it, or see if we can make some general Lua script that can support this.
08:52:03 I read:
08:52:11 - retries should be different
08:52:21 - wget should not keep entire queue in memory
08:52:33 perhaps we can compile a list of this?
08:56:11 not sure if wget has the ignores-from-files feature, too
08:56:41 being able to adjust ignores on-the-fly is really useful when you see a rabbithole that appeared mid-crawl, like a buggy link-extraction
08:57:16 had a forum once where a :// was goofed up and that caused grab-site to produce an endless link mess until i dished out some well-defined ignores
08:57:33 arkiver: ^
08:58:24 the queue-as-db, with ignored urls in it too, is also useful for when you manually need to do some url extraction.
09:03:00 thanks a lot masterx244|m
09:03:14 yeah any ideas anyone has, please dump them here! we might compile them into a document later on
09:05:37 the grab-site interface is also useful once you've got multiple crawls running, switching between screen sessions on linux is annoying.
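
To illustrate the queue-management difference described above (wget keeping the whole queue in memory and retrying immediately, wpull/grab-site keeping it in a sqlite db and retrying at the end), here is a minimal sketch of a disk-backed queue that re-enqueues failures at the back. This is not wpull's actual schema or code; the table layout and names are assumptions for illustration only.

    import sqlite3

    # Minimal sketch of a disk-backed crawl queue, loosely in the spirit of
    # wpull/grab-site; the schema and helper names here are illustrative only.
    class CrawlQueue:
        def __init__(self, path="queue.db"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS queue ("
                "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
                "  url TEXT UNIQUE,"
                "  status TEXT DEFAULT 'todo',"   # todo / done / error
                "  tries INTEGER DEFAULT 0)"
            )

        def add(self, url):
            # Duplicate URLs are silently dropped; the queue lives on disk,
            # so a crawl with millions of URLs does not balloon RAM usage.
            self.db.execute("INSERT OR IGNORE INTO queue (url) VALUES (?)", (url,))
            self.db.commit()

        def next_url(self):
            row = self.db.execute(
                "SELECT id, url FROM queue WHERE status = 'todo' ORDER BY id LIMIT 1"
            ).fetchone()
            return row  # (id, url), or None when the crawl is finished

        def mark(self, rowid, ok, max_tries=3):
            if ok:
                self.db.execute("UPDATE queue SET status = 'done' WHERE id = ?", (rowid,))
            else:
                # Instead of retrying immediately (wget's behaviour), push the
                # URL to the back of the queue by giving it a fresh, higher id.
                url, tries = self.db.execute(
                    "SELECT url, tries FROM queue WHERE id = ?", (rowid,)
                ).fetchone()
                self.db.execute("DELETE FROM queue WHERE id = ?", (rowid,))
                if tries + 1 < max_tries:
                    self.db.execute(
                        "INSERT INTO queue (url, tries) VALUES (?, ?)", (url, tries + 1)
                    )
                else:
                    self.db.execute(
                        "INSERT INTO queue (url, tries, status) VALUES (?, ?, 'error')",
                        (url, tries + 1),
                    )
            self.db.commit()

Because the queue (including errored rows) lives in a sqlite file rather than in memory, it can also be inspected or dumped with the sqlite3 CLI mid-crawl, which is the "queue-as-db is useful for manual url extraction" point above.
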
10:38:24 OrIdow6: not sure what you qualify as "outlinks", but I usually take any correct URL found that we would not get in the specific warrior, and queue that to #//
10:38:42 that might also include certain embeds, etc., that we would not get in the project itself
10:43:45 Arkiver edited GitLab (+724, Merge edit by [[Special:Contributions/Nemo…): https://wiki.archiveteam.org/?diff=49879&oldid=48787
10:43:55 finally fixed that merge conflict
10:45:46 Arkiver edited Deathwatch (+131, Merge edit by [[Special:Contributions/Taka|Taka]]): https://wiki.archiveteam.org/?diff=49880&oldid=49877
10:46:45 caught some outdated data on reddit project page
10:46:46 MasterX244 edited Reddit (+26, Resync'd archiving status): https://wiki.archiveteam.org/?diff=49881&oldid=49878
10:47:11 not sure if we should switch reddit to "endangered" on the wiki
10:47:46 Arkiver edited Zippyshare (+1109, Fix conflict): https://wiki.archiveteam.org/?diff=49882&oldid=49671
10:48:06 masterx244|m: i'd say it's no endangered
10:48:10 not*
10:49:19 masterx244|m: you're now an automoderated user, your edits will automatically be applied
10:49:46 Arkiver changed the user rights of User:MasterX244
10:50:02 we don't know what's happening after the blackout... and fingers crossed that we don't run into target limits on reddit
10:50:31 some of the content on reddit is endangered yes
10:50:45 but reddit itself, I'm not sure, I don't think they're close to running out of money
10:51:26 they seem to just be trying to increase revenue ahead of the IPO
10:51:56 unlike twitter - where there were serious money concerns, as also noted by messages from elon musk
10:54:44 reddit description on frontpage is outdated, too. it still shows "planned" on old reddit posts
16:38:06 arkiver: Two more things come to mind: setting and adjusting request delays, and easier compilation including different zstd versions (which I know you're already aware of).
16:39:29 Actually, I guess delays can already be done via the existing hooks.
16:40:16 But if we count that, reading and applying ignores from a file is also possible.
16:50:13 JAA: on zstd versions - just install a different zstd version and compile against that?
16:50:20 with*
16:50:59 arkiver: Probably yes, but needs auditing that it actually works correctly across some reasonable range of versions.
16:54:47 right yeah
16:55:06 (Ideally with a proper test suite on the upcoming CI.)
16:55:10 well i made this yesterday https://github.com/ArchiveTeam/wget-lua/issues/15
16:55:25 Yeah, I saw, that's why I added those brackets above. :-)
16:55:38 test suites are not my strong suit
16:55:52 but yeah i guess
16:56:46 Yeah, don't worry about that yet, CI needs to be running first anyway. It shouldn't be too difficult to just do some simple integration tests on it, retrieving a couple pages with and without a custom dict, verifying that the produced file has one frame per record etc.
16:57:04 yep
16:57:07 But it would allow us to continuously test against newly released zstd versions, which would be nice.
16:57:07 that sounds good
16:57:13 indeed!
16:57:38 i've never looked much into all the github automation (testing/building/whatever), so some examples and help there would be welcome
16:58:00 GitHub Actions is meh, we'll have something self-hosted soon.
16:58:13 (github automation is the wrong word - i mean git repo hosting services automation i guess)
16:58:27 The logs are not publicly accessible and get wiped after a couple months.
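
For the two Wget-AT wishlist items raised at 16:38-16:40 (adjustable request delays, and ignores read from a file that can be edited mid-crawl), the idea boils down to something like the sketch below. It is plain Python for illustration only; in Wget-AT this logic would live in the Lua hooks, and the file name and function names here are made up.

    import os
    import re
    import time

    IGNORE_FILE = "ignores.txt"   # hypothetical file, one regex per line

    _ignores = []
    _ignores_mtime = 0.0

    def load_ignores():
        """Re-read the ignore file whenever it changes on disk, so patterns
        added mid-crawl (e.g. to cut off a buggy link-extraction rabbithole)
        take effect without restarting the crawl."""
        global _ignores, _ignores_mtime
        try:
            mtime = os.path.getmtime(IGNORE_FILE)
        except OSError:
            return  # no ignore file yet
        if mtime != _ignores_mtime:
            with open(IGNORE_FILE) as fh:
                _ignores = [re.compile(line.strip()) for line in fh if line.strip()]
            _ignores_mtime = mtime

    def should_fetch(url):
        load_ignores()
        return not any(pattern.search(url) for pattern in _ignores)

    def polite_get(url, delay=1.0):
        """Skip ignored URLs and wait `delay` seconds between requests; the
        delay could just as well be read from a file to adjust it live."""
        if not should_fetch(url):
            return None
        time.sleep(delay)
        # ... issue the actual request here (left out of this sketch) ...
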
16:58:39 right
16:59:06 Very annoying when you come across an old open issue about 'something went wrong in this run: ' which just goes 404.
17:00:43 Summary of how that automation works: webhook notifies the CI, which then pulls from GitHub and runs whatever's configured, reporting the status back to GitHub so it can display pass/fail.
17:01:37 Anyway, soon™!
17:01:49 well let's get something up when we reach that point
17:02:02 meanwhile i still have to release proper FTP archiving support for Wget-AT
17:02:26 ... which is only stuck on that FTP conversation record order thing
17:02:36 so will make a decision on that this week and just push it out
17:03:37 in short - going purely with WARC specs we'll need a third record to note order of FTP conversation records, if we don't go with the WARC specs we'll only need a new header
17:03:53 i'm leaning towards going with following WARC specs
17:04:02 (not making up a new header)
17:04:09 -dev?
17:04:43 copied to there
17:24:08 ooh there's a -dev
17:24:12 sounds nerdy and fun
22:05:58 TheTechRobo edited ArchiveTeam Warrior (+81, Clarify that "Project code is out of date"…): https://wiki.archiveteam.org/?diff=49883&oldid=49631
22:30:09 Heya. Many months ago, I downloaded about a million posts and 500k comments from an old, public forum (est. 2008) whose admin hasn't been active for years. It's exhaustive as of 3 months ago. I've been storing it in a local PgSQL database on my computer, but I don't trust myself to do that responsibly... My computer's falling apart. What is the proper way to offload this (where can it be stored, what format should it be stored in, etc.)
22:37:00 Probably the best thing to do is upload whatever you have to archive.org, even if it's not the most convenient format, so that even if something goes wrong at least something's available
22:39:32 as in, just upload the database to archive.org and put it in the community data collection (I assume PgSQL database files are fairly portable?)
22:40:38 pokechu22: No, but conversion to sqlite is possible, and I wouldn't upload it raw anyway because the API is horrendous and exposes IPs
22:40:56 * the API that I pulled from is horrendous
22:42:08 also is the forum still up?
22:42:24 nicolas17: Yes it is, I was just worried about it
22:42:33 Oh, one thing that would also be useful is if you could extract imgur links for our imgur archival project (see https://wiki.archiveteam.org/index.php/Imgur) - if you can get a list of things that look vaguely like imgur links then we can feed them to a bot that will queue valid links for archival
22:42:50 pokechu22: I can do that for you :)
22:43:09 Though I doubt there are many imgur links
22:43:21 EvanBoehs|m: in that case, in addition to your download, maybe we can archive it (again) in website form so it's usable on the wayback machine
22:43:35 There's also a mediafire project (https://wiki.archiveteam.org/index.php/MediaFire)
22:44:49 nicolas17: Hmm. Maybe... Is it easier just to reconstruct the page for each URL? That would be trivial enough
22:45:23 no, wayback machine wants the exact http response from the server for it to count as preservation
22:46:06 Got it. And as a result of this preservation, anyone can enter URLs into the wayback machine?
22:46:13 Like the "save page" does but in bulk
22:46:38 * wayback machine to see the historic pages?
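
For the imgur-link extraction pokechu22 asks about above, a deliberately loose regex over the post and comment text is enough, since the queue bot filters out invalid links anyway. A rough sketch, assuming the dump has already been converted to sqlite as discussed; the table and column names are made up, not the actual forum schema.

    import re
    import sqlite3

    # Deliberately loose pattern: the queue bot validates links, so false
    # positives are acceptable; missed links are the thing to avoid.
    IMGUR_RE = re.compile(r"""https?://(?:[a-z]+\.)?imgur\.com/[^\s"'<>\)\]]+""", re.I)

    def extract_imgur_links(db_path="forum.sqlite"):
        """Scan post and comment bodies (hypothetical schema) for imgur-ish URLs."""
        db = sqlite3.connect(db_path)
        links = set()
        rows = db.execute("SELECT body FROM posts UNION ALL SELECT body FROM comments")
        for (body,) in rows:
            links.update(IMGUR_RE.findall(body or ""))
        return sorted(links)

    if __name__ == "__main__":
        for url in extract_imgur_links():
            print(url)

The resulting list can then be handed over to be fed into the archival queue as described above.
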
22:47:38 We've got archivebot which lets us recurse over pages on a site, and then everything it saves ends up in the wayback machine
22:48:26 but files (in the WARC format) saved by random people generally aren't added to web.archive.org. If you've got a list of URLs though, there are tools that can save each URL in the list and then that will end up on the wayback machine
22:48:52 pokechu22: Oh interesting, so you guys have special permission
22:48:53 * special permission?
22:49:18 Yeah
22:51:50 I see. It's probably easier because I have a list of all the URLs with content (like the imgur project) as opposed to recursively downloading something like 910000 pages. I'd quite like them all in the wayback machine, so would the best first step be to program a warrior or...
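
As pokechu22 notes above, a WARC produced locally does not end up on web.archive.org by itself; ingestion still goes through archivebot or the other tools mentioned. But for completeness, "save each URL in a list into a WARC" looks roughly like the sketch below, using the warcio library's capture_http helper. The urls.txt input file and the output filename are placeholders.

    from warcio.capture_http import capture_http
    import requests  # note: requests must be imported *after* capture_http

    def save_url_list(urls, warc_path="forum-pages.warc.gz"):
        """Fetch every URL in the list and record the raw HTTP requests and
        responses into a WARC file. The resulting file still has to be
        uploaded/ingested separately; it does not reach the Wayback Machine
        on its own."""
        with capture_http(warc_path):
            for url in urls:
                try:
                    requests.get(url, timeout=60)
                except requests.RequestException as exc:
                    print(f"failed: {url}: {exc}")

    if __name__ == "__main__":
        with open("urls.txt") as fh:   # hypothetical file, one URL per line
            save_url_list([line.strip() for line in fh if line.strip()])
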