00:00:11 Awesome. I do have about 1.5 million of the smaller files downloaded, although since it's only the creature file and not the entire page I'm unsure of the relevance
00:00:11 Looks like the stuff that was run before was https://spore-cr.ucoz.com/ and some stuff on staging.spore.com (https://transfer.archivete.am/inline/xEMox/staging.spore.com_seed_urls.txt specifically)
00:00:51 hmm, http://www.spore.com/sporepedia#qry=pg-220 looks to be handled via POST, which archivebot can't do
00:03:25 Sporepedia itself says 191,397,848 creations to date, but the browse tab says 1,769 newest creations - http://www.spore.com/sporepedia#qry=st-sc looks like it goes on through 503,904 things though
00:04:55 What I believe is the case is that it starts indexing at 500,000
00:05:26 or- wait. 500,000,000,000
00:10:16 some interesting URLs: http://www.spore.com/view/myspore/erokuma http://static.spore.com/static/thumb/501/110/199/501110199667.png http://static.spore.com/static/image/501/110/199/501110199667_lrg.png
00:10:50 It also says you can drag the thumbnail into the Spore creator app, but I'm not sure how that works (if there's an additional URL for extra data or they're hiding it in the image somehow)
00:11:05 They're hiding it in the image somehow
00:12:50 Just to verify that, I'm gonna go into Spore, turn off my internet, and pull one in
00:14:26 Looks like I found an article about it: https://nedbatchelder.com/blog/200806/spore_creature_creator_and_steganography.html
00:15:42 I'll post other possibly relevant URLs. https://www.spore.com/comm/developer/ https://www.spore.com/comm/samples (the latter has a list of possibly relevant URLs)
00:18:10 there's also e.g. http://www.spore.com/sporepedia#qry=sast-501110199667%3Apg-220 which does a POST to http://www.spore.com/jsserv/call/plaincall/assetService.fetchComments.dwr (a POST to http://www.spore.com/jsserv/call/plaincall/assetService.listAssets.dwr also exists). POSTs don't work with web.archive.org, unfortunately
00:18:52 The API docs there are helpful
00:19:31 That is unfortunate. However, as the API docs show, the files can be accessed directly, although that will miss out on users, comments, etc.
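Based on the example URLs above, the static thumbnail and large-image URLs look derivable from the 12-digit asset ID alone. A minimal sketch follows; the path grouping (first nine digits of the ID split into groups of three) is an assumption read off those two URLs, not documented behaviour:

    def spore_static_urls(asset_id):
        # Zero-pad to 12 digits and split the first nine into three groups of three,
        # matching e.g. /static/thumb/501/110/199/501110199667.png
        s = f"{asset_id:012d}"
        path = f"{s[0:3]}/{s[3:6]}/{s[6:9]}"
        return {
            "thumb": f"http://static.spore.com/static/thumb/{path}/{s}.png",
            "large": f"http://static.spore.com/static/image/{path}/{s}_lrg.png",
        }

    print(spore_static_urls(501110199667))
    # {'thumb': 'http://static.spore.com/static/thumb/501/110/199/501110199667.png',
    #  'large': 'http://static.spore.com/static/image/501/110/199/501110199667_lrg.png'}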
00:20:25 actually - disregard that last statement about missing out, as I don't know how to read the XML files
00:21:40 Theoretically we could generate WARCs containing the POST data if a whole custom crawl were done; they just wouldn't allow navigating the site directly on web.archive.org as it stands today (theoretically it could be implemented in the future, but I think there are technical complications)
00:22:23 Pedrosso: if I understand correctly, we *can* get the user and comment, we just can't display it as a functional website on web.archive.org
00:22:30 user and comment data*
00:22:41 That is awesome, thank you
00:23:06 so it will be a pile of XML or whatever, waiting for someone to make a tool to read it
00:24:45 We *can* if something custom were implemented - archivebot wouldn't work for it (though giving archivebot millions of images as a list also isn't easy, since it needs the list ahead of time; you can't just tell it the pattern the images follow)
00:29:37 I've no clue how I didn't find it before, but there's a page on the ArchiveTeam wiki with more possibly relevant info: https://wiki.archiveteam.org/index.php/Spore
00:31:50 I started an archivebot job for http://www.spore.com/ but that's not going to recurse into anything that's accessed via JavaScript only (so it's not going to find everything on the character creator)
00:34:50 Pokechu22 edited Spore (+211, mention that the thumbnails include data): https://wiki.archiveteam.org/?diff=51106&oldid=51087
00:46:48 pokechu22: You say archivebot needs the list ahead of time, could you elaborate on that? Because, I mean, making a very long list full of URLs following the pattern is possible, no?
00:47:43 Yeah, it's definitely possible, not too difficult even, but if http://static.spore.com/static/thumb/501/110/210/501110210233.png implies there are 1,110,210,233 images, I think that exceeds some of the reasonable limits :)
00:48:51 it's possible to upload zst-compressed text files to transfer.archivete.am and then remove the zst extension to download it decompressed, which helps a bit, but archivebot still downloads it decompressed (and ends up uploading that decompressed list to archive.org without any other compression)
00:49:05 You'll have to forgive me, as I've no real basis for what's reasonable
00:50:14 Yeah, I'm trying to dig up an example of when I last did this
00:50:22 Thank you
00:50:34 (the info on the archivebot article on the wiki is fairly out of date - we can and regularly do run jobs a lot larger than it recommends there)
00:51:15 A billion images? Oh dear...
00:52:12 Request rates of something like 25/s are possible in AB, but then we'd still be looking at something like 1.5 years...
00:52:46 If I interpret the information correctly, many URLs in that pattern could be pointing to nothing
00:53:08 That is likely, given that the site itself says there are only 191 million creations.
00:53:35 So roughly every 6th URL will work.
00:54:12 But it doesn't matter for this purpose, since we'd still have to try the full billion.
00:54:31 That's true. Unless there's any way to check if it exists beforehand
00:54:46 Well, any reasonable way
00:55:58 Yeah, maybe the API has some bulk lookup endpoint. Otherwise, probably not.
00:57:36 which API? Sporepedia's?
00:58:45 Yeah
01:01:10 OK, right, the example I had was https://wwii.germandocsinrussia.org/ of which there were 54533607 URLs related to map tiles (e.g. https://wwii.germandocsinrussia.org/system/pages/000/015/45/map/8f3b4796a50501d2550bad6385f57cf65d78ca736f78d93dbfe7fc063bf0d396/2/2_0.jpg - but at a bunch of zoom levels), which I generated by a script. I split the list into 5 lists of 11000000
01:01:13 URLs, which ended up being about 118.1 GiB of data per list. I ran those lists one at a time (starting the next one after the previous one finished); it took about 2 hours for archivebot to download each list of 11M URLs and queue it (as that process isn't very optimized), and it took about 5 days for it to actually download the URLs in that list (though I don't think that's
01:01:15 representative of actual speeds for downloading...)
01:01:52 In other cases (which I can't find) I did parallelize the process between a few AB pipelines, and each pipeline downloads multiple files at once, but it's still not ideal
01:02:24 That job is still fairly comparable though, because it's downloading a bunch of low-resolution images
01:03:29 even the "large" images (same item, different image - same info for the character, just higher res, I believe) are approx 60 kB
01:05:11 The storage space for downloaded images probably isn't an issue overall (as that can be uploaded to web.archive.org in 5 GB chunks); it's more the storage space used for the list of URLs and such
01:05:33 (quite ironic)
01:06:23 Similarly, I'm not sure how useful it'd be to save the "large" images, as it seems like they don't have the embedded data, unlike the "thumb" images - so presumably it'd be possible to regenerate the large images from the data in the thumb images in-game, which is the opposite of how thumbnails/high-resolution images usually work
01:07:26 That's fair
01:11:39 Assuming 20 kB for thumbnails and the listed 191,397,848 creatures, that's about 4 TB, which is a reasonable amount (on the large side, but still reasonable)
01:12:33 Would it be relevant to save comments as well? I'd suggest users, but that process is far less iterable
01:13:27 It looks like comments require POST, so archivebot can't do that, but those would be nice to save
01:13:45 This? https://www.spore.com/rest/comments/500226147573/0/5000
01:13:52 5000 is just an arbitrarily big value I put there
01:14:41 for what it's worth, https://www.spore.com/ gives me an expired certificate error though http://www.spore.com/ works - I'm guessing you dismissed that error beforehand?
01:15:22 I don't recall, so assume that I have
01:15:35 Looks like that API works: http://www.spore.com/rest/comments/500447019787/0/5000 - it's not the one used on http://www.spore.com/sporepedia#qry=sast-500447019787%3Aview-top_rated though
01:15:56 As long as it's the same information it's all good, right?
01:16:22 As for users though, it does seem like there's a "userid", however I can't see anywhere to put it to get the URL for the user page
01:16:39 Yeah, at least for having the information - it wouldn't make the first URL work on web.archive.org, but that's not as important
01:19:21 Ah, nedbat wrote that thumbnail data article, nice.
01:19:58 So, what'd need to be done is to get that URL list and split it into reasonable chunks?
01:22:53 I should also note that archivebot isn't the only possible tool
01:23:22 That is good to note, yes. Though I'm not really aware of many of the others
01:24:51 If there are really no rate limits, qwarc could get through this in no time.
01:25:35 I've done 2k requests per second before with qwarc.
01:25:51 That'd work out to a week for 1.1 billion.
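For the "get that URL list and split it into reasonable chunks" step, a rough sketch of generating candidate thumbnail URLs and writing them as zst-compressed list files (the format mentioned above for transfer.archivete.am). The ID range is an assumption taken from the discussion above (IDs appearing to start at 500,000,000,000 and running up to at least 501,110,210,233), and the chunk size simply mirrors the germandocsinrussia lists:

    import zstandard  # pip install zstandard

    START_ID = 500_000_000_000  # assumed lowest asset ID (see discussion above)
    END_ID = 501_110_210_233    # highest ID observed in the example thumbnail URL
    CHUNK = 11_000_000          # URLs per list, mirroring the germandocsinrussia jobs

    def thumb_url(asset_id):
        s = f"{asset_id:012d}"
        return f"http://static.spore.com/static/thumb/{s[0:3]}/{s[3:6]}/{s[6:9]}/{s}.png"

    cctx = zstandard.ZstdCompressor()
    for n, start in enumerate(range(START_ID, END_ID + 1, CHUNK)):
        # Stream each chunk straight into a .txt.zst file so nothing large sits in memory.
        with open(f"spore_thumbs_{n:04d}.txt.zst", "wb") as fh:
            with cctx.stream_writer(fh) as compressor:
                for asset_id in range(start, min(start + CHUNK, END_ID + 1)):
                    compressor.write((thumb_url(asset_id) + "\n").encode())

This would produce roughly a hundred list files; whether they'd actually be fed to archivebot, qwarc, or a DPoS project is a separate decision.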
01:26:52 I wouldn't say there are none, but they may not be too limiting. I have nothing against running it on my own machine, but I'm not really aware of how to use it properly as of now
01:27:33 Well, 'not too limiting' and 'allowing 2k/s' are two very different things. :-)
01:27:34 archiveteam also has https://wiki.archiveteam.org/index.php/DPoS where you have a bunch of tasks distributed to other users, and a Lua script that handles it. So creature:500447019787 could be one task, and that would fetch http://www.spore.com/rest/comments/500226147573/0/5000 and http://www.spore.com/rest/creature/500226147573 and
01:27:37 http://static.spore.com/static/image/500/226/147/500226147573_lrg.png and http://static.spore.com/static/thumb/500/226/147/500226147573.png and http://www.spore.com/rest/asset/500226147573, and could queue additional stuff based on that (e.g. /rest/asset gives the author, which could be queued)
01:28:06 It's fully scriptable... but that means you need to write the full script :)
01:28:15 So it's a lot more difficult to actually do it
01:28:30 Yeah, same with qwarc.
01:28:33 (oh, and DPoS projects can also record POST requests, though they still won't play back properly)
01:28:35 Frameworks for archiving things at scale.
01:29:34 Yeah, qwarc being a single-machine system instead of distributed
01:30:45 I wouldn't mind trying to run qwarc. Anything I should be aware of?
01:32:03 Beware of the dragons.
01:32:23 :-)
01:32:25 😳
01:33:56 There's no documentation, and there are some quirks to running it, especially memory-related. There's a memory 'leak' somewhere that I haven't been able to locate. With a large crawl like this, you're going to run into that.
01:35:24 What a convenient time for my internet to drop, hah
01:35:55 https://hackint.logs.kiska.pw/archiveteam-bs/20231108
01:37:36 ?
01:38:07 (message history, in case you missed anything)
01:38:11 Thank you
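To make the DPoS idea above concrete: one distributed task per creation would cover a small, fixed set of URLs plus whatever gets queued from the responses. Real DPoS projects script this in Lua, as mentioned above; the sketch below just expresses the same per-item URL set in Python for illustration, using only the endpoints listed in the log:

    def creature_task_urls(asset_id):
        # URL set for one "creature:<id>" task, following the endpoints listed above.
        s = f"{asset_id:012d}"
        path = f"{s[0:3]}/{s[3:6]}/{s[6:9]}"
        return [
            f"http://www.spore.com/rest/asset/{s}",            # metadata; gives the author, which could be queued as a follow-up task
            f"http://www.spore.com/rest/creature/{s}",
            f"http://www.spore.com/rest/comments/{s}/0/5000",  # 5000 = arbitrary large page size, as used above
            f"http://static.spore.com/static/thumb/{path}/{s}.png",
            f"http://static.spore.com/static/image/{path}/{s}_lrg.png",
        ]

    print("\n".join(creature_task_urls(500226147573)))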
04:30:14 This is where I can notify people of a website shutting down, right? Just making sure I have the right channel.
04:34:58 yes
04:35:08 which website, and when is the shutdown?
04:38:33 The website is brick-hill.com. I'm unsure of any exact shutdown date, but I do know of plans to re-launch the site due to ownership issues, without accounts and, I assume, forum posts by extension.
04:41:29 Huh. Brickset shut down their forums the other day. Is that just a coincidence?
04:42:18 Never heard of it, so probably. I also assume it wasn't as messy.
04:43:06 That was https://forum.brickset.com/ (a few pages are still in their server-side cache).
04:43:14 And yeah, not very messy.
04:44:36 Just funny that we go years without any LEGO-related shutdowns (that I remember), and then there are two in quick succession.
04:45:34 https://www.brick-hill.com/ does seem to work fairly well without JS, so that's nice.
04:45:58 Technically Brick-Hill is a Roblox clone, but the resemblances to Lego Island weren't accidental.
04:46:23 looks like the www/blog/merch subdomains have been captured previously
04:46:29 https://archive.fart.website/archivebot/viewer/?q=brick-hill.com
04:46:41 oh, some of them relatively recently
04:46:53 20230904
04:47:04 Hmm, https://archive.fart.website/archivebot/viewer/job/202309041546573rfz7 seems very small for well over 2 million forum threads.
04:47:59 could be one of those forums that won't let you view some boards if you aren't signed in
04:48:20 I'm pretty sure you don't need to be logged in to see the forums.
04:48:25 80% or so are in a single forum that is publicly viewable.
04:49:45 Hmm, no, the job did go deep there, too: https://web.archive.org/web/20230913003824/https://www.brick-hill.com/forum/2/40000
04:52:16 Ryz: ^ You started that job.
04:52:47 OK yeah, it got 200s from 2352033 unique thread IDs. I guess that should be reasonably close to the total.
07:04:07 I'm definitely completely wrong about this, but if we had #Y working, would we need dedicated projects for sites anymore, or could they be run through that with modification? Would we need AB?
07:48:34 vokunal|m: we would still need dedicated projects for sites that couldn't be crawled by generic spidering logic (because e.g. they depend on JavaScript API interactions).
07:51:03 theoretically it could do anything that archivebot could, but between overhead and the increased complexity of 'live' configuration in a distributed environment, we'd want to keep AB around anyway
08:07:21 from #archivebot: the energy company ENSTROGA has been declared bankrupt. Here is the court announcement: https://insolventies.rechtspraak.nl/#!/details/03.lim.23.189.F.1300.1.23 and here the official website: https://enstroga.nl
08:30:10 A12 Taxi Zoetermeer: https://www.taxizoetermeer.nl has been declared bankrupt. Court files: https://insolventies.rechtspraak.nl/#!/details/09.dha.23.294.F.1300.1.23
10:19:07 Jwn: Brickset is a LEGO fansite; the forum got too expensive and activity declined. Luckily we caught it just before the shredders were starting
13:43:00 On the Spore note, I found that https://staging.spore.com/ has its "static" and "www_static" subdirectories open; as far as I can tell, everything on there is also on the regular non-staging website, so it may be safe to extract everything, strip "staging.", and archive the main file links
17:50:50 Hello. I have a quick question. Is it possible to find a deleted private Imgur album among the huge archive dump with an album URL link or a link to a single image that was within the album?
17:52:57 Edel69: If it was archived, it's in the Wayback Machine. Album or image page or direct image link should all work.
17:57:53 Thanks for the response. I was under the impression that the team behind the archive job had actual access to files that were downloaded and backed up before the May 2023 TOS change went into effect.
18:01:13 Well, the raw data is all publicly accessible, but trust me, you don't want to work with that. :-)
18:02:07 I wouldn't even know what to do with all of that. lol
18:02:09 The WBM index should contain all of them, and that's the far more convenient way of accessing it.
18:06:29 Regarding https://www.brick-hill.com/forum/ - JAA, hmm, I'm a bit iffy on how much of it is covered, because I recall that the last couple of times it errored out from overloading or something...?
18:23:51 So I tried multiple album and separate image URLs in the Wayback Machine and I get no hits at all. I don't think any of my deleted account's uploads have been archived on there. None of my albums were public, so it wouldn't have been possible for there to be web archives maybe? My decade-old account was abruptly deleted with no warning just a few days ago, so if there's nothing at all I guess that means my data was somehow not archived.
18:27:01 I think we should've grabbed virtually all 5-char image IDs. But beyond that, it would've been mostly things that were publicly shared in one of the sources we scraped.
18:41:12 I finally got a hit from one of the limited URLs I have. https://i.imgur.com/eClDaR3.jpg - an image from a Resident Evil album. I guess this wouldn't help in finding anything else that was in the same album though.
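For spot-checking individual URLs like the one above, the Wayback Machine's availability API is a lightweight option (it only returns the closest snapshot, not full coverage). A minimal sketch:

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url):
        # Ask the Wayback Machine availability API for the closest capture of a URL.
        query = urllib.parse.urlencode({"url": url})
        with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
            data = json.load(resp)
        return data.get("archived_snapshots", {}).get("closest")  # None if nothing is archived

    print(closest_snapshot("https://i.imgur.com/eClDaR3.jpg"))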
18:52:27 Isn't the image ID in the URL link? If so, they're all 7 characters.
18:53:11 Yeah
18:53:35 really old URLs can be 5 characters though
18:54:01 they went through all the 5-character IDs before upping their IDs to 7 characters
18:55:09 Ah, so the 7-character IDs were also backed up. I was thinking he was saying that they only grabbed the 5-character IDs.
18:55:55 We didn't get all 5-char albums unfortunately; virtually all 5-char images should be saved, and then most 7-char ones we found
18:56:57 We grabbed basically all 900M 5-char images, and around 1 billion 7-character images, I think
18:57:27 we brute-forced a lot, but there are 3.5 trillion possible IDs in the 7-character space
18:58:34 just guessing on the 5-char, because I think that's what it would pan out to with our total done
19:02:50 That's a lot of downloading you all did. With that massive amount of data it would be like looking for a needle in a haystack to find anything specific, I guess, let alone a specific album collection. I'm just going to cut my losses and forget about it lol. Thanks for the help and information though.
19:04:07 Edel69: for 5-char albums there is metadata which might be easier to search through: https://archive.org/details/imgur_album5_api_dump
19:04:23 still a lot of data though
19:05:46 I don't have these locally anymore unfortunately; could've done a quick search otherwise :(
19:08:54 → #imgone for further discussion please
20:40:20 Manu edited Political parties/Germany (+63, /* CDU: more */): https://wiki.archiveteam.org/?diff=51107&oldid=48436
21:20:59 It appears that, according to others, ArchiveBot is putting pressure on spore.com, hence I'm not planning to do that archive right away using qwarc. I am going to keep looking into it though.
21:21:40 On another note, what kind of motivation (if any) is needed for using ArchiveBot? I've got a few small sites in mind, but I've mostly no good reason other than "I want 'em archived lol"
21:29:33 For small sites that's probably a good enough motivation right now, as there's nothing urgent that needs to be run
21:30:45 I'm not entirely sure about the amount of pressure - archivebot was slowed to one request per second and it seems like the site is responding basically instantly (and it didn't look like things were bad when it was running at con=3, d=250-375)
21:31:11 but I'm also not monitoring the site directly and am not an expert
21:31:49 Alright, that's good. As for offsite links, if I'm understanding this correctly, it recurses within a website, but doesn't do so with outlinks?
21:32:03 The intent is to provide ~~players~~ archivists with a sense of pride and accomplishment for ~~unlocking different heroes~~ slowing down the servers.
21:32:40 It does do outlinks by default; the outlink and any of its resources (e.g. embedded images, scripts, audio, or video (if it's done in a way that can be parsed automatically)) will be saved
21:32:50 There is a --no-offsite option to disable that, but it's generally fine to include them
21:33:01 I also wouldn't expect AB to make a difference for a website by a major game publisher, but you never know.
21:33:08 It was already slow last night before we started the job.
21:33:43 Haha
21:36:00 also, I was not informed of any restrictions on commands lol. Makes sense not to let people just randomly do it, but I didn't find anything like that on the wiki
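As a back-of-the-envelope check on the Imgur figures quoted earlier in the log (roughly 900M 5-character images and 3.5 trillion possible 7-character IDs), those numbers line up with the IDs being case-sensitive alphanumeric, i.e. base 62 - an assumption, but one the quoted totals imply:

    # Base-62 ID space sizes and the rough coverage of ~1 billion 7-char grabs.
    ALPHABET = 62  # a-z, A-Z, 0-9

    five_char = ALPHABET ** 5   # 916,132,832 -- the "basically all 900M" 5-char images
    seven_char = ALPHABET ** 7  # 3,521,614,606,208 -- the "3.5 trillion" 7-char space

    print(five_char, seven_char)
    print(f"1 billion 7-char images is roughly {1_000_000_000 / seven_char:.3%} of that space")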
21:40:45 > Note that you will need channel operator (@) or voice (+) permissions in order to issue archiving jobs
21:41:34 It's not mentioned in the command docs though, only on the wiki page.
21:43:01 Thank you
21:47:16 I've been trying to separate the docs of 'ArchiveBot the software' from 'ArchiveBot the AT instance'. But the permissions part should be in the former, too.
22:06:38 Manu created Political parties/Germany/Hamburg (+7072, Beginn collection political parties for…): https://wiki.archiveteam.org/?title=Political%20parties/Germany/Hamburg
23:00:49 JAABot edited CurrentWarriorProject (-4): https://wiki.archiveteam.org/?diff=51109&oldid=51000