I’ve spent some time searching this question, but I have yet to find a satisfying answer. The majority of answers that I have seen state something along the lines of the following:
1. “It’s just good security practice.”
2. “You need it if you are running a server.”
3. “You need it if you don’t trust the other devices on the network.”
4. “You need it if you are not behind a NAT.”
5. “You need it if you don’t trust the software running on your computer.”
The only answer that makes any sense to me is #5. #1 leaves a lot to be desired, as it advocates for doing something without thinking about why you’re doing it – it is essentially a non-answer. #2 is strange – why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to open that server’s port to the public. What difference does it make to then have another firewall that needs to be port forwarded? #3 is a strange one – what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there’s nothing to access. #4 feels like an extension of #3 – only, in this case, it is most likely a larger group that the device is exposed to. #5 is the only one that makes some sense; if you install a program that you do not trust (you don’t know how it works), you don’t want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be the door to get into your device, or a spy on your device’s actions.
If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it’s acting like the front door to a house, but this analogy doesn’t make much sense to me – without a house (a service listening on a port), what good is a door?
How would something like this be normally accomplished? I know that Firewalld has the ability to select a zone based on the connection, but, if I understand correctly, I think this is decided by the Firewalld daemon, rather than the packet filtering firewall itself (e.g. nftables). I don’t think an application layer firewall would be able to differentiate networks, so I don’t think something like OpenSnitch would be able to control this, for example.
What would be a better alternative that you would suggest?
The unfortunate thing about this – and I have encountered it personally – is that some networks may block VPN-related traffic. You can take measures to try to obfuscate the VPN traffic from the network, but it is still a potential headache that could lock you out of using your service.
Mmh, I was probably way too vague with that. This is done by something like FirewallD or whatever Windows or macOS uses for this. AFAIK it then uses packet filtering to accomplish the task. It seems FirewallD does the packet filtering itself rather than just tying into nftables and handing the filtering task off to that. I don’t think OpenSnitch does things like that. I’m really not an expert on firewalls; I could be wrong. If you read the Wikipedia article (which isn’t that good) you’ll see there are at least 3 main types of firewall, probably more sub-types and a plethora of different implementations. Some software does more than one of these things, and everything kinda overlaps. Depending on the use-case you might need more than just one concept like packet filtering. Or you connect different pieces of software, for example detect which network you connected to and re-configure the packet filter. Or like fail2ban: read the logfiles with one piece of software and hand the results to the packet-filter firewall and ban the hackers.
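That fail2ban hand-off is mostly just a few lines of configuration. A minimal sketch of a jail.local, assuming the nftables ban action that ships with recent fail2ban versions (the action name may differ on older releases):

```ini
# /etc/fail2ban/jail.local – minimal sketch
[DEFAULT]
# hand bans to nftables instead of the default iptables action
banaction = nftables-multiport
# how long a host stays banned, and the window in which failures are counted (seconds)
bantime  = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
```

fail2ban then reads the sshd log, and once an IP hits maxretry failures it gets dropped into a ban set that the packet filter enforces.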
I don’t really know how the network connection detection is accomplished or how it manages the firewall. Either something pops up and I click on it, or it doesn’t. My laptop has just 3 ports open: ssh, ipp (printing) and mdns. I haven’t felt the need to address that or to care about a firewall on that machine.

But I’ve made mistakes. I had mDNS or Bonjour or whatever it is that automatically shows who is on the network and which services they offer activated, and it showed some of the Apple devices at work – and I didn’t intend to show up in anyone’s chat with my laptop or anything. And at one point I forgot to deactivate a webserver on my laptop. I had used it to design a website and then forgotten about it. Everyone in the local networks I connected to in that time could have accessed it, and depending on where I was that could have made me mildly embarrassed. But no-one did, and I eventually deleted the webserver.

I think I’ve been living alright without caring about a firewall on my private laptop. I could have prevented that hypothetical scenario by using a firewall that detects where I’m at, but far more embarrassing stuff happens to other people – like people changing their name and then AirDropping silly stuff to someone who is just holding a lecture, or Skype popping up while their screen is mirrored to the projector in front of a large audience. But that has nothing to do with firewalls. Also, in the old days every Windows machine and network share was displayed to the whole network anyways. Nothing ever happened to me. And while I think that is not a good argument at all, I feel protected enough by using the free software I do and roughly knowing how to use a computer. I don’t see a need to install a firewall just to feel better. Maybe that changes once my laptop is cluttered and I lose track of what software opens new ports.
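(By the way, if you want to check what your own machine currently exposes, `ss` from iproute2 lists the listening sockets and the processes behind them:)

```
# list listening TCP/UDP sockets and the processes that opened them
sudo ss -tulpn
```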
On my server I use nftables: drop everything and specifically allow the ports that I want to be open. In case I forget about an experiment or configure something entirely wrong (which has also happened), it adds a layer of protection there. I handle things differently because the server is directly connected to the internet and gets targeted, while my laptop is behind some router or firewall all the time. Additionally, I set up fail2ban and configured every service so it isn’t susceptible to password brute-forcing. I’m currently learning about Web Application Firewalls. Maybe I’ll put ModSecurity in front of my Nextcloud. But it should be alright on its own – I keep it updated and followed best practices when setting it up.
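For reference, such a default-drop ruleset is only a handful of lines of nft – a minimal sketch, with the open ports being examples rather than anything specific to my server:

```
# /etc/nftables.conf – minimal default-drop sketch (ports are examples)
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept
        ct state invalid drop
        iif "lo" accept

        # the services that should actually be reachable
        tcp dport { 22, 80, 443 } accept

        # basic ICMP/ICMPv6 so ping and neighbour discovery keep working
        icmp type echo-request accept
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert, nd-router-solicit, nd-router-advert } accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
```

Load it with `nft -f /etc/nftables.conf` and anything not explicitly accepted gets dropped – exactly the safety net for forgotten experiments.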
I really don’t have a good answer to that. Separating your assortment of IoT devices from the rest of the network is probably a good idea; I personally would stop at that. I wouldn’t install cameras inside my house, and I wouldn’t buy an Alexa. I have a few smart lightbulbs and 2 thermostats; they communicate via Zigbee (and not Wifi), so that’s my separate network. And I do have a few Wifi IoT devices, a few plugs and an LED strip. I took care to buy ones where I could hack the firmware and flash Tasmota or ESPHome on them. So they run free software now and don’t connect to some manufacturer’s cloud. And I can keep them updated and hopefully free of security vulnerabilities indefinitely, despite them originally being really cheap no-name stuff from China.
You can also set up a guest Wifi (for your guests) if you want to. I recently did, but didn’t bother to for many years. I feel I can trust my guests; we’re old enough now and have outgrown the time when it was funny to mess with other people’s stuff, set an alarm for 3am or change the language to Arabic. And all they can do is use my printer anyways. So I usually just give my Wifi password to anyone who asks.
However, what I do might not be good advice for other people. I know people who don’t like to give their Wifi credentials to anyone, since they could be used to do illegal stuff over the internet connection. That would backfire on whoever owns the connection, and they’d face the legal trouble. That will also happen if it’s a guest Wifi. I’m personally not a fan of that kind of legislation: if somebody uses my tools to commit a crime, I don’t think I should be held responsible for it. So I don’t participate in that fearmongering and just share my tools and internet connection anyways.
(And you don’t absolutely need to put in all of that effort at home. Companies need to do it, since sending all the employees home and then paying 6 figures to another company to analyze the attack and restore the data is very expensive. At home you’re somewhat unlikely to get targeted directly. You’ll just be probed by all the stuff that scans for vulnerable and old IoT devices, open RDP connections, SSH, insecure webservers and badly configured telephony boxes. Your home Wifi router will do the bare minimum, and its NAT will filter that out for you. Do backups, though.)
That’s a bummer. There is not much you can do except obfuscate your traffic. Use something that runs on port 443 and looks like HTTPS (I think that’d be a TCP connection) or some other means of disguising the traffic. I think there are several approaches available.
Firewalld is capable of this – it can switch zones depending on the current connection.
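If I understand the NetworkManager integration right, you pin a connection profile to a zone and the switch then happens automatically when you connect – roughly like this, with “HomeWifi” being just a placeholder profile name:

```
# bind a NetworkManager connection profile to a firewalld zone
nmcli connection modify "HomeWifi" connection.zone home

# allow services only in that zone
sudo firewall-cmd --permanent --zone=home --add-service=ssh
sudo firewall-cmd --reload
```

Any other network then lands in the default (more restrictive) zone, and firewalld translates the zone rules into nftables/iptables rules underneath.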
There does still exist the risk of a vulnerability being pushed to whatever software you use – something essentially out of your control. Such a vulnerability could be used as an attack vector if all ports are left reachable.
Interesting! I haven’t heard of this. Side note, out of curiosity, how did you go about installing your Nextcloud instance? Manual install? AIO? Snap?
It would be a rather difficult thing to prove – one could certainly just make the argument you described, namely that someone else on the guest network did the illegal thing. I would argue that it is most likely difficult to prove otherwise.
But this is a really difficult thing to protect against. If someone manages to push code onto my computer and get it executed, I’m entirely out of luck. It could do anything that process is allowed to do: send data, mess with my files and databases, or delete stuff. I’m far more worried about the latter. Sandboxing and containerization are ways to mitigate this. And it’s the reason why I like Linux distributions like Debian. There are always the maintainers and other people who use the same software packages. If somebody should choose to inject malicious code into their software, or it gets bought and the new company adds trackers to it, it first has to pass the (Debian) maintainers. They’ll probably notice once they prepare the update (for Debian). And it gets rolled out to other people, too, who will probably notice and file a bug report. And I’m going to read about it in the news, since it’s something that rarely happens at all on Linux.
On the other hand, it might not be deliberate – the software could simply be vulnerable. That happens, and it can be and is exploited in the real world. I’m also forced to rely on other people to fix that before something happens to me. Again, sandboxing and containerization help to contain it. And keeping everything updated is the proper answer to that.
What I’ve seen in the real world is a CMS being compromised. Joomla had lots of bugs, and WordPress too. If people install lots of plugins, don’t update the CMS, let it rot and don’t maintain the server at all, after like 2 years(?) it can get compromised. The people who constantly probe all the servers on the internet will at some point find it, inject something like a rootkit, and use the server to send spam or to host viruses or phishing sites. You can pay Cloudflare $200 a month and hope they protect you from that, or use a Web Application Firewall and keep that up-to-date yourself, or just keep the software itself up-to-date. If you operate some online services and there is some rivalry going on, it’s bound to happen faster: people might target your server and specifically scan it for vulnerabilities way earlier than the drive-by attacks get hold of it. Ultimately there is no way around keeping a server maintained.
I have two: YunoHost powers my NAS at home. It contains all the big files, important vacation pictures etc. YunoHost is an AIO solution(?), an operating system based on Debian that aims at making hosting and administration simple and easy. And it is. You don’t have to worry too much about learning how to do all of the stuff correctly, since they do it for you. I’ve looked at the webserver config and so on, and they seem to follow best practices: disallow old TLS ciphers, activate HSTS and all the stuff that makes cross-site scripting and similar attacks hard to impossible. And I pay for a small VPS. I use Docker and docker-compose on it, read all the instructions and configured the reverse proxy myself. I also do some experimentation there in other Docker containers, try new software… But I don’t really like to maintain all that stuff. Nextcloud and Traefik seem somewhat stable, but I regularly have to fiddle with the docker-compose files of other projects that change after a major update. I’m currently looking for a solution to make that easier and planning to rework that server – and then also run Lemmy, Matrix chat and a microblogging platform on it.
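Roughly, wiring a container up to Traefik just means putting router labels on it. A sketch of what that can look like – the hostname, entrypoint and certresolver names here are placeholders that depend on your Traefik static configuration:

```yaml
# docker-compose.yml – rough sketch, names are placeholders
services:
  nextcloud:
    image: nextcloud:apache
    restart: unless-stopped
    volumes:
      - nextcloud_data:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`cloud.example.org`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"

volumes:
  nextcloud_data:
```

The nice part is that only Traefik publishes ports on the host; the application containers stay reachable solely through the proxy.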
And it depends on where you live and the legislation there. If someone downloads some Harry Potter movies or uses your Wifi to send bomb threats to their school… they’ll log the IP, contact the ISP, and the ISP is forced to tell them your name. You’ll get a letter or a visit from the police. If they proceed and sue you, you’ll have to pay a lawyer to defend yourself and it’s a hassle. I think I’d call it coercion – even if you’re in the right, they can temporarily make your life a misery.

In Germany, we have the concept of “Störerhaftung” on top. Even if you’re not the offender yourself, if you were willingly (or “causally adequately”(?)) part of a crime, you’re considered a “disruptor” and can be held responsible, especially for stopping that “disruption”. I think it was meant to get at people who technically don’t commit crimes themselves but deliberately enable other people to do so. For some time it got applied to Wifi here. The constitutional court had to rule, and now I think it doesn’t really apply to that anymore. It’s complicated… I can’t sum it up in a few sentences. Nowadays they just send you letters threatening to sue you and wanting a hundred euros for the lawyer who wrote the letter. They’ll say your argument is a defensive lie and that you did it. Or you need to tell them exactly who did it and rat out your friends/partner/kids or whoever it was. Of course that’s not how it works in the end, but they’ll try to pressure people, and I can imagine it is not an enjoyable situation to be in. I’ve never experienced it myself; I don’t download copyrighted stuff from the obvious platforms that are bound to get you in trouble, and neither does anyone else in my close group of friends and family.
Not necessarily. An application layer firewall, for example, could certainly get in the way of it trying to send data externally.
Are you referring to a service leaving a port open that can be connected to from the network?
I’m definitely curious about the outcome of this – Matrix especially. Perhaps the newer/alternative servers work a bit better now, but I’ve heard that, for Synapse at least, Matrix can be very demanding on hardware (apparently the issues mostly arise when one joins a larger server).
Interesting. Do you mean “held responsible” to simply stop the disruption, or “held responsible” for the actions of/damage caused by the disruption?
I think an application layer firewall usually struggles to do more than the utmost basics. If, for example, my Firefox were compromised and started not only talking to Firefox Sync to send the history to my phone, but also sending my behavior and all the passwords I type to a third party… how would the firewall know? From its perspective it’s just random outgoing encrypted traffic. And I open lots of outbound connections to all kinds of random servers with my Firefox; the same applies to other software. I think such firewalls only protect you when you run a new executable that you know has no business sending data. If software you actually use were susceptible to attack, the firewall would need to ask you after each and every update of Firefox whether it’s still okay, and you’d really need to verify the state of your software. If you just click ‘Allow’, there is no added benefit. What it can protect you from is connecting to a list of known malicious addresses, and from people smuggling new, dedicated malware onto your computer.
I don’t want to say doing the basics is wrong or anything. If I were to use Windows and lots of different software, I’d probably think about using an application level firewall. But I don’t see a real benefit for my situation. However, I’d like Linux to do some more sandboxing and asking for permissions on the desktop. Even if it can’t protect you from everything and may not be a big leap for people who just click ‘Accept’ for everything, it might be a good direction and encourage more fine-grained permissions and better-defined ways for software to tie together and interact.
I mean your webserver, CMS or browser has a vulnerability, that gets exploited, and you get hacked. The webserver has open ports anyways in order to work at all, the CMS is allowed to process requests, and the browser is allowed to talk to websites. A maliciously crafted request or response to your software can trigger it to fail and do something it shouldn’t do.
Sure, I have a Synapse Matrix server running on my YunoHost. It works fine for me. I’m going to install Dendrite or the other newer one next. I won’t complain if I can cut memory consumption and load down to a minimum.
Yeah, the issue was that it meant both. You were part of the crime, you were involved in the causality and linked to the damages somehow. Obviously not to the full extent, since you didn’t do it yourself, but more than just ‘don’t allow it to happen again’. Obviously that has consequences. And I think that’s no longer the case when it comes to Wifi. Now it’s just the first, plus they can ask for a fixed amount of money since, through your neglect, you caused their lawyer to put in some effort.
If it’s going to some undesirable domain or IP, then you can block the request for that application. The exact capabilities depend on the application layer firewall in question, but this is, at least, possible with OpenSnitch.
As for the actual content of the traffic, is this not the case with essentially all firewalls? They can’t see the content of the traffic if it is using TLS. You would need to somehow intercept the data before it is encrypted on the device, and I’m not aware of any firewall that has such a capability.
The exact level of fine-grain control heavily depends on the application layer firewall in question.
Interesting.
I do, perhaps, somewhat understand this argument, but it still feels quite ridiculous to me.
I think OpenSnitch can do it in roughly 2 different ways. Either you use an allow-list. That’s pretty secure, but it’ll severely interfere with how you’re used to browsing the internet: you’re gonna allow Wikipedia and your favorite news sources, but you won’t be browsing Lemmy and randomly clicking on articles and blogs, since you have to specifically allow them in the firewall first. Or you use a deny-list. That’s something like what Chrome does: keep a list of well-known malicious sites and ask you ‘Do you really want to visit that site? It spreads malware.’ It adds tremendously to security, but it won’t protect you entirely. Hackers frequently break into webservers to spread malware from new servers – ones that aren’t yet in the list of bad IPs. That works for some time, until the application firewall and the Chrome browser catch up, and then they move on to a different server. You should definitely think about that and avoid being the millionth victim, however.
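As far as I know, each of those decisions ends up as a small JSON rule on disk (under /etc/opensnitchd/rules/, if I remember right). The exact field names may differ between OpenSnitch versions, so treat this as a sketch with a made-up hostname:

```json
{
  "name": "deny-known-malicious-host",
  "enabled": true,
  "action": "deny",
  "duration": "always",
  "operator": {
    "type": "simple",
    "operand": "dest.host",
    "data": "malware.example.org"
  }
}
```

An allow-list setup is the same kind of rule with an allow action for the things you trust, combined with a restrictive default for everything else.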
I think we’re talking about vastly different concepts here. Desktop computers and servers, consumers and enterprises are threatened in vastly different ways, and thus they need different solutions that handle those different threats. On a desktop computer, the main way of compromising it is getting people to click on something, or to do whatever an official-looking e-mail instructs them to do. On a server that is meaningless: there aren’t that many random applications someone clicks on without thinking it through, and there is no e-mail client on the server. But on the other hand, you’re serving random people from all over the world, so your connections are different, too. And if someone wants to upload their malware somewhere or send spam… they’re going to go for a server, not a desktop computer.
About the “Störerhaftung”: I think so, too. It’s been ridiculous, and in the end the courts also ruled it’s against the law. The 100€ is also not something you have to pay. They want it, and it’s just a way to settle out of court: if you pay them, they’ll promise to forget about this one time and not care about who did it. I think these kinds of settlements exist all around the world, and they’re not illegal. And the copyright holders have to find some means of pressuring people, even if it’s a bit shady, since such copyright offenses aren’t a major crime and the courts are oftentimes busy with more important stuff.