-
le0n
34
-
OrIdow6
arkiver: Is current best practice for a site (Egloos) with external links still to get them in the grab, or is there a way I can send them to #// somehow?
-
arkiver
OrIdow6: best practice is to send outlinks to #// !
-
arkiver
just add them to a table and i'll add a backfeed key for it
-
OrIdow6
arkiver: Alright, thanks
-
nighthnh099_
what would be the best wget settings for warc files? I'm trying to mirror something that needs cookies, and I want to test out mirroring them with warc so I can go through urls faster and then just submit them here later
-
masterx244|m
nighthnh099_: use grab-site, wget is bugged
-
nighthnh099_
oh okay, from what I'm reading this is entirely warc? I'll stick to wget for personal mirrors I guess
-
masterx244|m
yes, but you can extract WARCs later or use a tool like replayweb.page. wpull/grabsite has advantages over wget on queue management, too
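(For example, pulling pages back out of a WARC later only takes a few lines with the warcio library; a minimal sketch, with a made-up filename:)

    # Minimal sketch: list response records in a WARC with warcio
    # (pip install warcio). "crawl.warc.gz" is a placeholder filename.
    from warcio.archiveiterator import ArchiveIterator

    with open("crawl.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                url = record.rec_headers.get_header("WARC-Target-URI")
                body = record.content_stream().read()
                print(url, len(body))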
-
masterx244|m
wget does its retries immediately when a request fails, wpull puts them at the end of the queue (useful if something 500s and some time waiting unsticks it)
-
masterx244|m
and wget keeps the entire queue in memory; have fun with large forum crawls with many URLs, that can eat a few gigabytes of RAM just for the queue. wpull uses a sqlite db for that
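(To illustrate both points, a sketch of a disk-backed queue that re-queues failures at the back; this is the idea only, not wpull's actual schema:)

    # Sketch of the idea, not wpull's real schema: the queue lives in
    # SQLite instead of RAM, and a failed URL is re-queued at the back.
    import sqlite3

    db = sqlite3.connect("queue.db")
    db.execute("""CREATE TABLE IF NOT EXISTS queue (
        id INTEGER PRIMARY KEY AUTOINCREMENT,  -- insertion order = crawl order
        url TEXT UNIQUE,
        status TEXT DEFAULT 'todo'             -- 'todo' or 'done'
    )""")

    def push(url):
        db.execute("INSERT OR IGNORE INTO queue (url) VALUES (?)", (url,))

    def pop():
        # Oldest pending URL first.
        return db.execute("SELECT id, url FROM queue WHERE status = 'todo' "
                          "ORDER BY id LIMIT 1").fetchone()

    def mark(rowid, url, ok):
        db.execute("DELETE FROM queue WHERE id = ?", (rowid,))
        if ok:
            db.execute("INSERT INTO queue (id, url, status) VALUES (?, ?, 'done')",
                       (rowid, url))
        else:
            # The retry goes to the back: a fresh row gets a new, higher id,
            # so it runs after everything currently queued (wget, by
            # contrast, retries immediately).
            db.execute("INSERT INTO queue (url) VALUES (?)", (url,))
        db.commit()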
-
arkiver
hmm
-
arkiver
JAA: masterx244|m: do we have a list anywhere of what would make Wget-AT more usable for the regular user outside of Warrior projects?
-
arkiver
I could try to make changes to Wget-AT to support it, or see if we can make some general Lua script that can support this.
-
arkiver
I read:
-
arkiver
- retries should be different
-
arkiver
- wget should not keep entire queue in memory
-
arkiver
perhaps we can compile a list of this?
-
masterx244|m
not sure if wget has the ignores-from-files feature, too
-
masterx244|m
being able to adjust ignores on-the-fly is really useful when you see a rabbithole appear mid-crawl, like from buggy link extraction
-
masterx244|m
had a forum once where a :// was goofed up, and that made grab-site produce an endless link mess until i dished out some well-defined ignores
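(The mechanism behind on-the-fly ignores is simple; roughly this, though grab-site's real implementation differs:)

    # Rough idea behind adjustable ignores (not grab-site's actual code):
    # re-read the pattern file whenever it changes, so mid-crawl edits
    # take effect on the next URL checked.
    import os
    import re

    IGNORE_FILE = "ignores.txt"  # one regex per line; hypothetical path
    _cache = (None, [])          # (mtime, compiled patterns)

    def load_ignores():
        global _cache
        mtime = os.path.getmtime(IGNORE_FILE)
        if mtime != _cache[0]:
            with open(IGNORE_FILE) as f:
                pats = [re.compile(l.strip()) for l in f if l.strip()]
            _cache = (mtime, pats)
        return _cache[1]

    def should_ignore(url):
        return any(p.search(url) for p in load_ignores())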
-
masterx244|m
arkiver: ^
-
masterx244|m
the queue-as-db, with ignored urls in it too, is also useful for when you manually need to do some url extraction.
-
arkiver
thanks a lot masterx244|m
-
arkiver
yeah, any ideas anyone has, please dump them here! we might compile them into a document later on
-
masterx244|m
the grab-site interface is also useful once you've got multiple crawls running; switching between screen sessions on linux gets annoying.
-
arkiver
OrIdow6: not sure what you qualify as "outlinks", but I usually take any correct URL found that we would not get in the specific warrior, and queue that to #//
-
arkiver
that might also include certain embeds, etc., that we would not get in the project itself
-
h2ibot
Arkiver edited GitLab (+724, Merge edit by [[Special:Contributions/Nemo…):
wiki.archiveteam.org/?diff=49879&oldid=48787
-
arkiver
finally fixed that merge conflict
-
h2ibot
Arkiver edited Deathwatch (+131, Merge edit by [[Special:Contributions/Taka|Taka]]):
wiki.archiveteam.org/?diff=49880&oldid=49877
-
masterx244|m
caught some outdated data on reddit project page
-
h2ibot
MasterX244 edited Reddit (+26, Resync'd archiving status):
wiki.archiveteam.org/?diff=49881&oldid=49878
-
masterx244|m
not sure if we should switch reddit to "endangered" on the wiki
-
h2ibot
Arkiver edited Zippyshare (+1109, Fix conflict):
wiki.archiveteam.org/?diff=49882&oldid=49671
-
arkiver
masterx244|m: i'd say it's not endangered
-
arkiver
masterx244|m: you're now an automoderated user, your edits will automatically be applied
-
h2ibot
Arkiver changed the user rights of User:MasterX244
-
masterx244|m
we don't know what's happening after the blackout... and fingers crossed that we don't run into target limits on reddit
-
arkiver
some of the content on reddit is endangered yes
-
arkiver
but reddit itself, I'm not sure, I don't think they're close to running out of money
-
arkiver
they seem to just be trying to increase revenue ahead of the IPO
-
arkiver
unlike twitter - where there were serious money concerns, as also noted by messages from elon musk
-
masterx244|m
the reddit description on the frontpage is outdated, too. it still shows "planned" for old reddit posts
-
JAA
arkiver: Two more things come to mind: setting and adjusting request delays, and easier compilation with different zstd versions (which I know you're already aware of).
-
JAA
Actually, I guess delays can already be done via the existing hooks.
-
JAA
But if we count that, reading and applying ignores from a file is also possible.
-
arkiver
JAA: on zstd versions - just install a different zstd version and compile with that?
-
JAA
arkiver: Probably yes, but needs auditing that it actually works correctly across some reasonable range of versions.
-
arkiver
right yeah
-
JAA
(Ideally with a proper test suite on the upcoming CI.)
-
arkiver
well i made this yesterday
ArchiveTeam/wget-lua #15
-
JAA
Yeah, I saw, that's why I added those brackets above. :-)
-
arkiver
test suites are not my strong suit
-
arkiver
but yeah i guess
-
JAA
Yeah, don't worry about that yet, CI needs to be running first anyway. It shouldn't be too difficult to just do some simple integration tests on it, retrieving a couple pages with and without a custom dict, verifying that the produced file has one frame per record etc.
-
arkiver
yep
-
JAA
But it would allow us to continuously test against newly released zstd versions, which would be nice.
-
arkiver
that sounds good
-
arkiver
indeed!
-
arkiver
i've never looked much into all the github automation (testing/building/whatever), so some examples and help there would be welcome
-
JAA
GitHub Actions is meh, we'll have something self-hosted soon.
-
arkiver
(github automation is the wrong word - i mean git repo hosting services automation i guess)
-
JAA
The logs are not publicly accessible and get wiped after a couple months.
-
arkiver
right
-
JAA
Very annoying when you come across an old open issue about 'something went wrong in this run: <link>' which just goes 404.
-
JAA
Summary of how that automation works: webhook notifies the CI, which then pulls from GitHub and runs whatever's configured, reporting the status back to GitHub so it can display pass/fail.
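(A minimal sketch of that loop; the paths, port, token and payload field are illustrative, and the status report uses GitHub's commit status API:)

    # Minimal sketch of webhook -> pull -> run -> report, as described above.
    # Repo path, port and token are placeholders; the final POST uses
    # GitHub's commit status API so the commit shows pass/fail.
    import json
    import subprocess
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOKEN = "..."                    # API token with repo:status scope
    REPO = "ArchiveTeam/wget-lua"    # repository to report status for
    CLONE = "/srv/ci/wget-lua"       # local working copy

    class Hook(BaseHTTPRequestHandler):
        def do_POST(self):
            payload = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            sha = payload["after"]   # pushed commit, per GitHub's push event
            subprocess.run(["git", "-C", CLONE, "pull"], check=True)
            ok = subprocess.run(["make", "-C", CLONE, "check"]).returncode == 0
            body = json.dumps({"state": "success" if ok else "failure",
                               "context": "self-hosted-ci"}).encode()
            req = urllib.request.Request(
                f"https://api.github.com/repos/{REPO}/statuses/{sha}",
                data=body, headers={"Authorization": f"token {TOKEN}"})
            urllib.request.urlopen(req)
            self.send_response(204)
            self.end_headers()

    HTTPServer(("", 8080), Hook).serve_forever()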
-
JAA
Anyway, soon™!
-
arkiver
well let's get something up when we reach that point
-
arkiver
meanwhile i still have to release proper FTP archiving support for Wget-AT
-
arkiver
... which is only stuck on that FTP conversation record order thing
-
arkiver
so will make a decision on that this week and just push it out
-
arkiver
in short - going purely with the WARC specs we'll need a third record to note the order of the FTP conversation records; if we don't go with the WARC specs we'll only need a new header
-
arkiver
i'm leaning towards going with following WARC specs
-
arkiver
(not making up a new header)
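(Purely illustrative, with made-up identifiers: the spec-following option would add a third record, e.g. a metadata record pointing at the other two via WARC-Concurrent-To and noting their order in its body, roughly:

    WARC/1.1
    WARC-Type: metadata
    WARC-Target-URI: ftp://example.com/pub/file.bin
    WARC-Concurrent-To: <urn:uuid:...control-conversation-record...>
    WARC-Concurrent-To: <urn:uuid:...file-resource-record...>
    Content-Type: application/warc-fields

    first-record: <urn:uuid:...control-conversation-record...>

while the shortcut would instead stamp a non-standard header, say WARC-FTP-Conversation-Position, onto the existing records. The field and header names here are hypothetical, not the final design.)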
-
JAA
-dev?
-
arkiver
copied to there
-
fireonlive
ooh there's a -dev
-
fireonlive
sounds nerdy and fun
-
h2ibot
TheTechRobo edited ArchiveTeam Warrior (+81, Clarify that "Project code is out of date"…):
wiki.archiveteam.org/?diff=49883&oldid=49631
-
EvanBoehs|m
Heya. Many months ago, I downloaded about a million posts and 500k comments from an old, public forum (est. 2008) whose admin hasn't been active for years. It's exhaustive as of 3 months ago. I've been storing it in a local PgSQL database on my computer, but I don't trust myself to do that responsibly... My computer's falling apart. What is the proper way to offload this (where can it be stored, what format should it be stored in, etc.)?
-
pokechu22
Probably the best thing to do is upload whatever you have to archive.org, even if it's not the most convenient format, so that even if something goes wrong at least something's available
-
pokechu22
as in, just upload the database to archive.org and put it in the community data collection (I assume PgSQL database files are fairly portable?)
-
EvanBoehs|m
pokechu22: No, but conversion to sqlite is possible, and I wouldn't upload it raw anyway because the API that I pulled from is horrendous and exposes IPs
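(If it helps, a sketch of dumping a Postgres table into a portable SQLite file, keeping only cleaned columns; table and column names are invented:)

    # Sketch: copy a cleaned subset of the Postgres data into SQLite for
    # upload. Table and column names are hypothetical; note that IPs and
    # other sensitive columns are simply not selected.
    import sqlite3
    import psycopg2  # pip install psycopg2-binary

    pg = psycopg2.connect("dbname=forum")
    lite = sqlite3.connect("forum.sqlite")
    lite.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, "
                 "body TEXT, posted_at TEXT)")

    with pg.cursor(name="dump") as cur:  # server-side cursor streams rows
        cur.execute("SELECT id, author, body, posted_at FROM posts")
        for pid, author, body, posted_at in cur:
            lite.execute("INSERT INTO posts VALUES (?, ?, ?, ?)",
                         (pid, author, body, str(posted_at)))

    lite.commit()
    lite.close()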
-
nicolas17
also is the forum still up?
-
EvanBoehs|m
nicolas17: Yes it is, I was just worried about it
-
pokechu22
Oh, one thing that would also be useful is if you could extract imgur links for our imgur archival project (see
wiki.archiveteam.org/index.php/Imgur) - if you can get a list of things that look vaguely like imgur links then we can feed them to a bot that will queue valid links for archival
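(A sketch of what that extraction could look like against the Postgres copy; table and column names are hypothetical, and the regex is deliberately loose since invalid matches can be filtered downstream:)

    # Sketch: pull anything imgur-ish out of the post/comment bodies.
    # Table/column names are made up; the loose regex is fine because the
    # queue bot will discard URLs that turn out to be invalid.
    import re
    import psycopg2  # pip install psycopg2-binary

    IMGUR = re.compile(r"""https?://(?:[a-z]+\.)?imgur\.com/[^\s"'<>)]+""", re.I)

    conn = psycopg2.connect("dbname=forum")
    links = set()
    with conn.cursor(name="scan") as cur:   # server-side cursor streams rows
        cur.execute("SELECT body FROM posts UNION ALL SELECT body FROM comments")
        for (body,) in cur:
            links.update(IMGUR.findall(body or ""))

    with open("imgur-links.txt", "w") as f:
        f.write("\n".join(sorted(links)))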
-
EvanBoehs|m
pokechu22: I can do that for you :)
-
EvanBoehs|m
Though I doubt there are many imgur links
-
nicolas17
EvanBoehs|m: in that case, in addition to your download, maybe we can archive it (again) in website form so it's usable on the wayback machine
-
EvanBoehs|m
nicolas17: Hmm. Maybe... Is it easier just to reconstruct the page for each URL? That would be trivial enough
-
nicolas17
no, wayback machine wants the exact http response from the server for it to count as preservation
-
EvanBoehs|m
Got it. And as a result of this preservation, anyone can enter URLs into the wayback machine to see the historic pages?
-
EvanBoehs|m
Like the "save page" does but in bulk
-
pokechu22
We've got archivebot, which lets us recurse over pages on a site, and then everything it saves ends up in the wayback machine
-
pokechu22
but, files (in the WARC format) saved by random people generally aren't added to web.archive.org. If you've got a list of URLs, though, there are tools that can save each URL in the list, and that will then end up on the wayback machine
-
EvanBoehs|m
pokechu22: Oh interesting, so you guys have special permission?
-
pokechu22
Yeah
-
EvanBoehs|m
I see. It's probably easier because I have a list of all the URLs with content (like the imgur project) as opposed to recursively downloading something like 910000 pages. I'd quite like them all in the wayback machine, so would the best first step be to program a warrior or...