  • There don’t seem to be any disk reads per request at a glance, though that might just be due to OS-level read caching. There’s a spike on the first page refresh/load after dropping the read cache, which could indicate the file is read in on every fresh page load. I’d have to run the browser under call tracing to be sure, which I’ll probably try later today.
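
    For reference, the cache-drop and tracing steps I mean look roughly like this (a sketch from memory – adjust the process name to your browser):

    # flush the OS page cache, then reload the page and watch for the read spike
    echo 3 | sudo tee /proc/sys/vm/drop_caches
    # trace file opens/reads in the running browser to see if it re-reads /etc/hosts
    sudo strace -f -e trace=openat,read -p "$(pgrep -o chrome)" 2>&1 | grep hosts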

    For my other devices I use unbound hosted on the router, so this is the first time I’ve run into this issue myself.


  • “You’re using software to do something it wasn’t designed to do”

    By that measure, Chrome isn’t exactly following best practices either – if you’re going to reinvent the wheel, at least improve on the original instead of making it run worse. True, it’s not the intended method of use, but resource-wise it shouldn’t cause issues – it would take active effort to make it run this poorly.

    “Why would you even think to do something like this?”

    As I said, because the company VPN enforces its own DNS for intranet resources and the like. Technically I could override it with a single configuration rule, but that would also technically breach the guidelines, as opposed to the more moderate rules-lawyery approach I’m attempting here.

    If it were up to me, the employer would just add a blocklist to their own forwarder for the benefit of everyone working there…

    But I guess I’ll settle for a local dnsmasq on the laptop for now. Thanks for the discussion 👌🏼
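
    For anyone landing here later, the dnsmasq setup I have in mind is something like this (domain paths and the upstream address are placeholders, not my actual config):

    # /etc/dnsmasq.conf -- hypothetical sketch
    listen-address=127.0.0.1         # serve DNS only to this machine
    no-resolv                        # don't take upstreams from /etc/resolv.conf
    server=10.0.0.53                 # forward queries to the VPN-provided resolver
    addn-hosts=/etc/blocklist.hosts  # keep the big blocklist out of /etc/hosts
    cache-size=10000                 # cache upstream answers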


  • TL;DR: looks like you’re right, although Chrome shouldn’t be struggling with that number of hosts to chug through. This ended up being an interesting rabbit hole.

    My home network already uses unbound with a proper blocklist configured, but I can’t use the same setup directly on my work computer because the VPN sets its own DNS. I can only override it with a local resolver on the work laptop, and I’d really like to get by with just systemd-resolved instead of having to add dnsmasq or similar for this. None of the other tools I use struggle with this setup, as they use the system IP stack.

    It may well be that Chromium has a somewhat more sophisticated network stack (rather than just using the system-provided libraries), and I remember the docs indicating something to that effect. In any case, the code isn’t (or shouldn’t be) paging through the whole file every time there’s a query – either it forwards the query to another resolver or resolves it locally, but either way there will be a cache. That cache then ends up holding the queried domains in order of access, after which a long /etc/hosts won’t matter (see the toy cache sketch after the benchmark below). The worst case after initially paging in the hosts file is 3-5 ms (per query) to compare through the 100k-700k lines before falling through, and that only needs to happen once regardless of where the actual resolving takes place. At a glance the Chrome net stack should cache hosts-file lookups as well. So at the very least it doesn’t make sense for it to struggle for 5-10 seconds on every consecutive refresh of the page with a warm DNS cache in memory…

    …or that’s how it should work. Your comment inspired me to test it a bit more, and lo: after trying out a hosts file with 10 000 000 bogus entries, Chrome was brought completely to its knees. However, that number of string comparisons is absolutely nothing in practice – Python, with its slow interpreter, manages to compare against every row in 300 ms, and a crude C implementation does it in 23 ms (approx. 2 ms with 1 million rows; both far more than what I have appended to my hosts file). So the file being long should have nothing to do with it unless something is very wrong with the implementation. Comparing against /etc/hosts should be cheap, as it doesn’t support wildcard entries – the comparisons are just a simple 1:1 check against the first matching row. I’ll keep investigating and see if there’s a quick change to be made in how the hosts are read in. From what I can see, fixing this shouldn’t break any other use cases.

    For reference, if you want to check the performance of 10 million comparisons on your own hardware:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    
    #define N_ENTRIES 10000000
    
    int main(void) {
    	struct timeval start_t;
    	struct timeval end_t;
    
    	/* Build 10M bogus hostnames, mimicking an oversized hosts file. */
    	char **strs = malloc(sizeof(char *) * N_ENTRIES);
    	for (int i = 0; i < N_ENTRIES; i++) {
    		char *urlbuf = malloc(sizeof(char) * 50);
    		snprintf(urlbuf, 50, "%d.bogus.local", i);
    		strs[i] = urlbuf;
    	}
    
    	printf("Checking comparisons through array of 10M strings.\n");
    	gettimeofday(&start_t, NULL);
    
    	/* Accumulate the results so the compiler can't optimize the loop away. */
    	volatile long sink = 0;
    	for (int i = 0; i < N_ENTRIES; i++) {
    		sink += strcmp(strs[i], "test.url.local");
    	}
    
    	gettimeofday(&end_t, NULL);
    
    	/* Millisecond delta; include tv_sec so a run crossing a second
    	   boundary doesn't come out negative. */
    	long duration = (end_t.tv_sec - start_t.tv_sec) * 1000
    	              + (end_t.tv_usec - start_t.tv_usec) / 1000;
    	printf("Spent %ld ms on the operation.\n", duration);
    
    	for (int i = 0; i < N_ENTRIES; i++) {
    		free(strs[i]);
    	}
    	free(strs);
    	return 0;
    }
    




  • Yep, got myself a Jääkäri S after getting fed up with backpacks breaking all the time. This one actually seems like it can stand the test of time and the daily lugging of two laptops.

    Whatever the brand, one thing to keep in mind is the material: nylon (polyamide) takes much more abuse than e.g. polyester. It’s also good if the bottom of the bag is as continuous as possible rather than held together by seams. Savotta also reinforces the bottom so it doesn’t wear as much under weight.

    If you happen to be in Finland, the Jääkäri S is currently on sale at Motonet for 90 € – not sure if they ship elsewhere in Europe though.





  • Per-text and per-minute plans were the norm here for a long time; I had one until the mid-2010s IIRC. A single text cost something like 0.069 €. Parents kept their kids from overspending with prepaid plans, which were the norm for elementary schoolers. In Europe you typically don’t pay to receive calls, so your parents could still call you even if you ran out of credit.

    We got unlimited data plans before widespread unlimited texting, which meant people mostly stopped texting by the early 2010s. I remember my phone plan getting unlimited 3G in 2010 for 0.99 €/month (approx. 1.40 $ back then), albeit slow AF (256 kbps). Most people switched to e.g. Kik, or later WhatsApp, after that.


  • Probably varies a lot based on where you grew up. I got my first phone in 2006, when I was 9, and was among the last in my class to get one. Phone plans were really cheap in Finland by then, partly because the (then) largest phone manufacturer, Nokia, was Finnish, and our telecom operators were in tight competition. (We have three separate carriers with country-wide networks, as was already the case in the early 2000s.)

    I’d say the turning point here was 2003, when Nokia launched the model 1100, which was dirt cheap. I vaguely remember the price eventually falling as low as 19 € on sale, at which point the phone cost about the same as a typical monthly phone plan.


  • Yep, that’s a bit sketchy, and probably does have to do with marketing and securing more funding. Overhyping their quantum work might also be an attempt to cover for the poor image of their latest AI “achievements”.

    But I’m mainly worried that all these companies crying wolf will make people in the relevant fields push back on implementing quantum-proof encryption – multiple companies are making considerable progress with quantum computing, and it’s not a threat to be ignored.


  • There’s still noticeable incremental progress, and now that liboqs is out and the first reasonably quantum-resistant algorithms have working initial implementations, I see no reason not to move to a hybrid solution, just in case – especially for more sensitive data like communications, healthcare, and banking.

    Just encapsulate the current asymmetric primitives with oqs, e.g. ed25519 inside ML-KEM. That way you’ll have an added layer of security on top of the oqs implementation in case there are growing pains, and because the library hasn’t yet passed audits or full peer review.
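
    The KEM half of such a hybrid looks roughly like this with liboqs (a sketch from memory – the algorithm identifier depends on your liboqs version, older builds call it “Kyber768”, and the classical half plus the KDF step are elided). Build with -loqs:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <oqs/oqs.h>
    
    int main(void) {
    	/* Post-quantum half of a hybrid exchange via liboqs. */
    	OQS_KEM *kem = OQS_KEM_new("ML-KEM-768");
    	if (kem == NULL) {
    		fprintf(stderr, "ML-KEM-768 not enabled in this liboqs build\n");
    		return 1;
    	}
    
    	uint8_t *pk = malloc(kem->length_public_key);
    	uint8_t *sk = malloc(kem->length_secret_key);
    	uint8_t *ct = malloc(kem->length_ciphertext);
    	uint8_t *ss_enc = malloc(kem->length_shared_secret);
    	uint8_t *ss_dec = malloc(kem->length_shared_secret);
    
    	/* Receiver makes a keypair; sender encapsulates against the public
    	   key; receiver decapsulates the ciphertext to the same secret. */
    	OQS_KEM_keypair(kem, pk, sk);
    	OQS_KEM_encaps(kem, ct, ss_enc, pk);
    	OQS_KEM_decaps(kem, ss_dec, ct, sk);
    
    	printf("shared secrets %s\n",
    	       memcmp(ss_enc, ss_dec, kem->length_shared_secret) == 0
    	           ? "match" : "differ");
    
    	/* Hybrid step (elided): final_key = KDF(ss_kem || ss_classical),
    	   so breaking either scheme alone isn't enough. */
    
    	OQS_KEM_free(kem);
    	free(pk); free(sk); free(ct); free(ss_enc); free(ss_dec);
    	return 0;
    }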

    Cryptography has to hold up for multiple decades, and the added overhead is a small price to pay for future security. Health data, for example, can affect a person even 30 years later, so we have a responsibility to ensure this data can’t be accessed without authorization even that far in the future. No one can guarantee that, but we should at least make our best effort.

    Have we really not gotten past collectively shooting ourselves in the foot with poor security planning? Even AWS was allowing SHA-1 signatures for authentication as recently as 2014, over a decade after it was deemed insecure. Considering how poorly people handle key management, it’s plausible there are old AWS-style requests out there with still-working keys waiting to be brute-forced.

    No, we don’t have working quantum computers that threaten encryption right now. Yes, it is entirely plausible the technology matures within the next 30 years, and that’s the assumption we have to work with.


  • Not sure about the rest of Fennoscandia, but Finland at least has multiple large co-ops. One of the largest banks, OP (literally named “co-op bank”), is a co-operative that many people own a part of; many of my friends are members.

    Also, Finland’s largest retail conglomerate (with a 48.3 % market share of Finnish retail) is a consumer co-op, which creates a very difficult situation for every other retail business, as it can undercut practically everyone thanks to its weaker profit incentive. 2.4 million people hold a membership, which is quite sizable in a country of under 6 million (though I’m not sure if that number includes Estonians as well).