

  • I ran out of CRTCs, but I wanted another monitor. I widened a virtual display and drew its left portion on one monitor, as usual. Then I had a cron job that copied chunks of the rest into the framebuffer of a USB-to-DVI-D adapter, roughly the idea sketched below. It could manage about 5 fps redrawing the whole screen, so I chose things to put there where that wouldn't matter too much. The only painful part was arranging the windows on that monitor, with the mouse updating very infrequently and routinely being drawn in two or more places in the framebuffer.
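
    (Cron fires at most once a minute, so at ~5 fps the launched job has to loop internally.) A minimal sketch of the copying step is below, assuming the wide virtual desktop is readable at /dev/fb0 and the adapter shows up as /dev/fb1; the device paths, geometry, and pixel format are illustrative guesses, not the actual config.

    ```python
    # Minimal sketch: mirror the right-hand slice of a wide virtual
    # framebuffer (/dev/fb0) into a USB-to-DVI-D adapter's framebuffer
    # (/dev/fb1). All geometry below is assumed; read yours from fbset.
    SRC, DST = "/dev/fb0", "/dev/fb1"
    SRC_WIDTH = 3840    # assumed total virtual width, in pixels
    DST_WIDTH = 1920    # assumed adapter width, in pixels
    HEIGHT = 1080       # assumed height, in pixels
    BPP = 4             # assumed 32-bit pixels, 4 bytes each
    X_OFFSET = SRC_WIDTH - DST_WIDTH  # left edge of the mirrored slice

    with open(SRC, "rb") as src, open(DST, "r+b") as dst:
        for row in range(HEIGHT):
            # Seek to this row's slice in the wide source...
            src.seek((row * SRC_WIDTH + X_OFFSET) * BPP)
            # ...and to the matching row in the destination, then copy.
            dst.seek(row * DST_WIDTH * BPP)
            dst.write(src.read(DST_WIDTH * BPP))
    ```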



  • I think we’re still headed up the peak of inflated expectations. Quantum computing may be better at a category of problems that do a significant amount of math on a small amount of data. Traditional computing is likely to stay better at anything that requires a large amount of input data, or a large amount of output data, or only uses a small amount of math to transform the inputs to the outputs.

    Anything you do with SQL, spreadsheets, images, music and video, and basically anything involved in rendering is pretty much untouchable. On the other hand, a limited number of use cases (cryptography, cryptocurrencies, maybe even AI/ML) might be much cheaper and faster with a quantum computer. There are possible military applications, so countries with big militaries are spending until they know whether that's a weakness or not. If it turns out quantum computers can't do any of the things that looked possible from the expectation peak, the whole industry will fizzle.

    As for my opinion: comparing QC to early silicon computers is very misleading, because early computers improved by becoming far smaller. QC is already close to the minimum possible size, so there won't be a comparable "then grow the circuit size by a factor of ten million" step. I think they probably can't do anything world-shaking.





  • Modern operating systems have made it take very little knowledge to connect to WiFi and browse the internet. If you want to use your computer for more than that, it can still take a longer learning process. I download 3D models for printing, and I wanted an image of each model so I could find things more easily. On Linux, I can make those images with only about a hundred characters in the terminal (see the sketch after this comment); on Windows, I would either need to learn PowerShell or make an image for each file by hand.

    The way I understand “learning Linux” these days is reimagining what a computer can do for you to include the rich powers of open source software, so that when you have a problem that computers are very good at, you recognize that there’s an obvious solution on Linux that Windows doesn’t have.
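
    As a sketch of the kind of hundred-character job mentioned above, expanded into a script: render a preview PNG for every STL in a folder by wrapping each mesh in a throwaway OpenSCAD scene. It assumes openscad is on your PATH, and the folder name is a placeholder.

    ```python
    import pathlib
    import subprocess

    for stl in pathlib.Path("models").glob("*.stl"):  # "models" is a placeholder folder
        scene = stl.with_suffix(".scad")
        scene.write_text(f'import("{stl.name}");\n')  # minimal scene wrapping the mesh
        # OpenSCAD renders a PNG preview when the output filename ends in .png
        subprocess.run(["openscad", "-o", str(stl.with_suffix(".png")), str(scene)], check=True)
        scene.unlink()  # clean up the throwaway scene file
    ```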







  • What we have invented are massive, automatic, no-holds-barred pattern-recognition machines. LLMs use detected patterns in text to respond to questions. Image recognition is pattern recognition, with some of those patterns given names (like "cat" or "book"). Image generation is a little different, but it basically flips image recognition on its head, editing images to look more like the patterns it was taught to recognize.

    This can all do some cool stuff, and there are some very helpful outcomes. It's also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes, and behaviors from the billion-plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don't even know to look for.

    This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups and less expensive ones for others? AI can find that pattern.

    This is also true in law (I know there's supposed to be no systemic bias in our court systems, but AI can find those patterns too), in engineering (any guesses how human engineers change their safety practices based on the area where a bridge or dam will be built? AI will find out for us), and so on.

    The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find and which ones it isn't supposed to find. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness left behind inside popular models, there are severe constraints on what it should be doing.
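
    As a toy illustration of that last point (all data below is synthetic, made up purely for the example): train a model on biased historical decisions and it picks up the bias without ever being told the protected attribute matters.

    ```python
    # Toy example: a model trained on biased decisions reproduces the bias.
    # Everything here is synthetic; nothing comes from real records.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    severity = rng.normal(size=n)        # legitimate clinical signal
    group = rng.integers(0, 2, size=n)   # protected attribute (0 or 1)

    # Historical decisions: driven by severity, minus a bias against group 1.
    logit = 1.5 * severity - 1.0 * group
    treated = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(np.column_stack([severity, group]), treated)
    print(model.coef_)  # the group coefficient comes out clearly negative:
                        # the model "found" the bias and will repeat it
    ```

    Nobody told the model to look at group membership; it found the pattern because the pattern was in the data.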



  • Cost is obviously a big factor. Almost every printer can change to any nozzle size and layer height for just the cost of the nozzle. Print volume is a major limitation, depending on your use case. The filaments a printer can handle will probably be the same across all of the relatively low-cost printers, with the only significant difference being direct drive vs. Bowden.

    Bed leveling is huge, and probably makes the most difference in print quality on low-cost printers these days. If there's an easy way to tension the belts, that's a plus. If there isn't a power switch on the front (or even if there is), an emergency stop switch can be a help, for example if the nozzle is running into the bed.

    Maintenance varies from printer to printer; generally you're aiming for tight but not too tight on any belts or rollers. If the pulleys on the motors aren't preinstalled, use something like blue Loctite to fix them firmly in place.

    Also, if you plan to buy a printer, make sure it has a decent community around it. Running into the same problems as a bunch of other people is a big plus when you're a beginner, so popular printers are better.

    Teaching Tech made a calibration guide website that I’ve had a lot of good experiences with.