When running OpenVPN in an LXC environment, users may encounter a specific error that prevents the OpenVPN service from operating correctly. The error manifests as follows:
Jan 08 00:56:47 fw openvpn[404]: openvpn_execve: unable to fork: Resource temporarily unavailable (errno=11)
Jan 08 00:56:47 fw openvpn[404]: Exiting due to fatal error
Jan 08 00:56:47 fw systemd[1]: openvpn-client@yourvpn.service: Main process exited, code=exited, status=1/FAILURE
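errno=11 (EAGAIN) from fork() means a process-count limit was hit. Before changing anything, you can check which limit is in play; a quick diagnostic sketch using standard Linux tools (nothing OpenVPN-specific):

```shell
# Show the process limits that can make fork() fail with EAGAIN (errno=11).
ulimit -u                                  # max user processes for the current shell
grep -i 'max processes' /proc/self/limits  # soft/hard limit for the current process
```

In an LXC container these values are often capped well below the host's defaults, which is why the fork fails.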
Solution:
To resolve this issue, it’s necessary to edit the OpenVPN service using the systemctl command. Here are the steps to follow:
Edit the OpenVPN Service:
Run the command systemctl edit openvpn-client@yourvpn, replacing yourvpn with the name of your VPN configuration
In the editor that opens, add the following lines in the appropriate section (after the comment ### Anything between here and the comment below will become the new contents of the file):
[Service]
LimitNPROC=infinity
Save and close the editor.
Reload the systemd Daemon:
Execute systemctl daemon-reload for the changes to take effect.
Restart the OpenVPN Service:
Restart the service with the command systemctl restart openvpn-client@yourvpn, replacing yourvpn with the name of your VPN configuration.
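Under the hood, systemctl edit simply creates a drop-in override file. A minimal sketch of the equivalent manual steps (staged in a temporary directory here; the real path, assuming a default systemd layout, is /etc/systemd/system/openvpn-client@yourvpn.service.d/):

```shell
# Build the same override.conf that `systemctl edit` would create.
dropin_dir=$(mktemp -d)   # stand-in for /etc/systemd/system/openvpn-client@yourvpn.service.d
printf '[Service]\nLimitNPROC=infinity\n' > "$dropin_dir/override.conf"
cat "$dropin_dir/override.conf"
# After copying override.conf into the real drop-in directory (as root):
#   systemctl daemon-reload && systemctl restart openvpn-client@yourvpn
```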
Although this problem frequently occurs in LXC environments using Ubuntu, it may arise in other operating systems or similar configurations. The key to solving the issue lies in adjusting the process limit for the OpenVPN service, allowing it to create the necessary processes for normal operation.
Memory decay, the gradual loss of data stored in a computer’s memory cells, is a growing concern for everyone. It is caused by a variety of factors, such as age, temperature, and the number of read-write cycles, and it can lead to data loss, corruption, and other issues.
Too many R/W cycles will cause the memory to lose its ability to store new information, but not using it at all will affect the information it already contains… why?… let’s see:
A memory cell in a computer can be thought of as a tiny capacitor that stores a charge to represent binary data (1 or 0). The charge (electrons trapped there) on the capacitor represents the binary state of the memory cell.
In a computer memory cell, the state of the charge on the capacitor represents the data stored in the cell. When a voltage is applied to one of the plates, it changes the charge on the capacitor, representing a change in the binary state of the cell.
To read the data stored in a memory cell, the voltage on the plates is measured. If the voltage is above a certain threshold, the cell is considered to be storing a 1; if the voltage is below the threshold, the cell is considered to be storing a 0.
Over time, the charge on the capacitor can leak away, leading to memory decay and the gradual loss of data stored in the cell.
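As a rough mental model (a standard RC-discharge approximation, not a datasheet figure for any particular memory technology), the stored voltage decays exponentially with time:

```latex
V(t) = V_0 \, e^{-t/(RC)}
```

Here V_0 is the initial charge voltage and RC is the leakage time constant; once V(t) drifts past the read threshold, the bit flips.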
Backups:
If you make a backup on an SD card and leave it for several years, there is a chance that the data stored on the card will experience some decay over time, depending on various factors such as temperature, humidity, and the number of read-write cycles the memory has seen. However, reading the memory every few months can help reduce this decay (at a small cost).
When you read the memory on the SD card, it activates the memory cells and applies a voltage to them, which can recharge the capacitors and help to maintain the data stored in the cells. This is because reading the memory refreshes the charge on the capacitors and helps to prevent the charge from leaking away over time.
It’s important to note that this mechanism is not foolproof, and the data stored on the SD card may still experience some decay over time, even with regular reading. However, regularly reading the memory can help to minimize the risk of decay and extend the lifespan of the data stored on the card.
How to read/refresh the whole memory:
Suppose you don’t need to read this memory very often because it just keeps your old documents… but you still want to preserve them for different purposes (e.g. backups).
You only need to do this:
pv /dev/sdX > /dev/null
replacing /dev/sdX with your device; you can find the device you want to refresh using the lsblk command
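If pv is not installed, GNU dd performs the same full sequential read and can report progress. A sketch (demonstrated against a scratch file so it runs anywhere; in practice point it at your device, e.g. /dev/sdX from lsblk):

```shell
# Full sequential read with dd instead of pv.
# Real usage: dd if=/dev/sdX of=/dev/null bs=4M status=progress
scratch=$(mktemp)                  # stand-in for /dev/sdX
head -c 1048576 /dev/urandom > "$scratch"
dd if="$scratch" of=/dev/null bs=4M status=progress
rm -f "$scratch"
```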
There are many discussions about whether or not to use a password manager. However, most experts agree that you must use a strong password for every system/service (plus 2FA, but we are not going to elaborate on that today).
So to create a strong password, you need:
A decent password length (e.g. more than 14 characters); older articles recommend 8, but unless passwords are produced by a random generator, short ones are vulnerable to statistical attacks (e.g. Markov chains)
Use non-dictionary words (no dates in it, no trailing numbers like qwerty99 or qwerty01, no dictionary-based words)
Use alphanumeric and special characters.
Don’t use l33t transformations (P4ssw0rd!)
Even if a password is 100% secure, please don’t reuse it between systems/services
And don’t share segments of the password between systems (like p4s%sW0rdGMAIL and p4s%sW0rdFACEBOOK, or p4s%sW0rd2002 and p4s%sW0rd2003, or p4s%sW0rd01 and p4s%sW0rd02, or anything like that)
It should go without saying that reusing a password is extremely dangerous… even if a service is 100% PCI compliant, that does not mean it is 100% hacker-proof. If you still believe there is no need to be alarmed, try searching for yourself at https://haveibeenpwned.com/
So, the secure alternative is to use a different password like these for every service: R@mf8909%3ZA2111, D2mH!8u7s95s4, @#$%aei54mk!36644s
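One simple way to generate passwords of that shape (a sketch using the kernel CSPRNG and standard tools; the character set and the 16-character length are arbitrary choices, adjust to taste):

```shell
# Generate a 16-character random password from letters, digits and symbols.
tr -dc 'A-Za-z0-9!@#$%^&*' < /dev/urandom | head -c 16; echo
```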
The question:
Are people capable of remembering every password for 100 different services?
The answer is: most people can’t. Most people can only remember 1 or 2 secure passwords, and they are usually something like MyPuppyName2022!
So, this is usually the reason we use password managers (and tokens like YubiKeys, or 2FA)…
Are password managers secure?
Well, the problem is widely discussed: if the password manager fails, everything goes down with it. So if you are capable of creating secure passwords, remembering every one of them, and rotating them every few months, you don’t need a password manager; otherwise, it’s a decent option.
The other problem with password managers is the clipboard…
Many users copy the password from the password manager via the clipboard, and if you are compromised, even by an unprivileged application, even at some point in the future, your password may still be sitting in memory and can be recovered by that application.
And if you think that reading the password off the screen and typing it by hand is a good idea: no, it is not… it may be leaked by a USB hardware keylogger, or simply by someone taking a picture of your screen (or maybe some “advanced” TEMPEST screen-radiation recovery).
So, password managers like KeePassXC have a very nice option to avoid all of this: “Perform Auto-Type”
This option types the password straight into the program that is requesting it; it’s not perfect, but it’s pretty decent and simulates real keyboard input…
This is a simple, short how-to for installing VirtualBox 7 on openSUSE 15.
Here we are handling two problems:
There is no repo for openSUSE 15.4 (we need to apply a trick)
There is no documentation on how to create a proper UEFI Secure Boot MOK (Machine Owner Key) for newer openSUSE releases, which demand that the key carry special attributes such as “codeSigning”
So, here is the answer:
Step 1: Basic installation:
# Get/Install the repo...
wget https://download.virtualbox.org/virtualbox/rpm/opensuse/virtualbox.repo -O /etc/zypp/repos.d/virtualbox.repo
# Workaround (there is no repo for 15.4, but 15.3 works fine)
sed -i 's/$releasever/15.3/g' /etc/zypp/repos.d/virtualbox.repo
# Install VBox7.0 (and accept the certificate) and kernel build tools
# Repository: VirtualBox for openSUSE 15.3 - x86_64
# Key Fingerprint: 7B0F AB3A 13B9 0743 5925 D9C9 5442 2A4B 98AB 5139
zypper install VirtualBox-7.0 kernel-default-devel
# add your user to the vboxusers group (replace myuser):
usermod -aG vboxusers myuser
Step 2: MOK Key Creation (Only For UEFI+Secure Boot Systems):
mkdir -p /var/lib/shim-signed/mok
# -days 3650: without it, openssl's default certificate validity is only 30 days
openssl req -nodes -new -x509 -newkey rsa:4096 -days 3650 -addext "extendedKeyUsage = codeSigning" -outform DER -keyout /var/lib/shim-signed/mok/MOK.priv -out /var/lib/shim-signed/mok/MOK.der
# here, use a random throwaway password for enrolling the key (you will enter it twice: now, and once at the enrollment screen)
mokutil --import /var/lib/shim-signed/mok/MOK.der
reboot
# In the EFI MOK Utility...
# Enroll the key... the password is the one you entered in mokutil; after enrollment you won't be asked for it again.
Step 3: Install the extension pack
# Installing the extension pack (you can re-use this every time after you do zypper up):
VBOXVERSION=$(rpm -qa 'VirtualBox*' | cut -d'-' -f3 | cut -d_ -f1)
wget "https://download.virtualbox.org/virtualbox/${VBOXVERSION}/Oracle_VM_VirtualBox_Extension_Pack-${VBOXVERSION}.vbox-extpack"
VBoxManage extpack install --replace "Oracle_VM_VirtualBox_Extension_Pack-${VBOXVERSION}.vbox-extpack"
Now, add the following repos (depending on your needs):
# for google chrome:
zypper ar --refresh https://dl.google.com/linux/chrome/rpm/stable/x86_64 Google-Chrome
# for google repos:
wget https://dl.google.com/linux/linux_signing_key.pub
rpm --import linux_signing_key.pub
# For security software:
zypper ar --refresh https://download.opensuse.org/repositories/security/15.4/security.repo
# Graphics:
zypper ar --refresh https://download.opensuse.org/repositories/graphics/15.4/graphics.repo
# For Snapd:
zypper ar --refresh https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.4 snappy
# For Codecs...
zypper ar --refresh -cfp 90 -n Packman https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_15.4/ packman
Then, you may want to install snapd from the snappy repo above (much software is available through snapd). The following commands set up suse-prime for switchable Intel/NVIDIA graphics; execute them one by one:
zypper install suse-prime bbswitch-kmp-default
zypper remove xf86-video-nouveau
zypper install xf86-video-intel
cd /etc/uefi/certs
for i in *; do mokutil -i "$i"; done
# This key is not installed by default on new systems, but bbswitch still uses it:
cd /tmp
wget https://rpmfind.net/linux/opensuse/distribution/leap/15.3/repo/oss/x86_64/openSUSE-signkey-cert-20210302-lp153.1.1.x86_64.rpm
rpm2cpio openSUSE-signkey-cert-20210302-lp153.1.1.x86_64.rpm | cpio -idmv
cd /tmp/etc/uefi/certs/
for i in *; do mokutil -i "$i"; done
reboot
# To switch the graphics to the integrated card (e.g. Intel), execute as root:
prime-select intel
# now, logout from X and get back...
Most security assessments only include CVEs and known vulnerabilities, but many fail to address the true potential security risks. And this creates a big problem for your organization.
The problem starts because most organizations only want a security analysis based on already-known vulnerabilities, a “tell me which KB to patch” exercise, but this approach fails to protect you in two ways:
You are not resilient against 0-day attacks and undetected vulnerabilities
You will be focused on patching instead of creating meaningful policies that can help you much more…
Patching instead of policies…
Well… patching itself is a good policy, but chasing specific patches is a very dangerous game. I would never recommend installing some specific KB just to fix a vulnerability found in a report. But why?
First and foremost, most security assessments, especially black-box ones, are constrained in time, resources, and access, and will only identify a subset of the known vulnerabilities. And no, it does not matter if you have hired the best hacker; even the best will miss many things.
So, you may be asking now: why do we hire vulnerability assessments, penetration tests, and so on?
The answer is quite simple:
To understand the main issues in your security policy
A meaningful security assessment can help you create a meaningful security policy that includes periodic updates, strong password management, access controls, and many other things that may be useful to you (and you may not need six months of testing to discover what you require).
The security policy and the APT…
When your system misses a patch, it is obviously out of date and potentially vulnerable, but it’s more dangerous than that: savvy external attackers will identify this and will try to take advantage of the policy failure across the whole network and over time, including in places outside the assessment scope.
So, if your security policy is to check for vulnerabilities every 6 months and apply the identified patches within the next 6 months, this tells the APT the following:
The patching gap indicates a window of time that can be used for exploitation (the first 6 months)
And… most probably there are still many vulnerabilities in place…
So, there is no easy answer here, because many organizations can’t sustain an aggressive update schedule, but creating a meaningful/resilient security policy helps a great deal (even when a patch is not yet installed)…
Not every fixed vulnerability has a CVE
If you think that being free of CVEs means you are OK… I have some bad news…
Did you know that when someone develops software, many of the bugs are not cataloged as security vulnerabilities? Many bugs are reported (and fixed) as stability bugs, and in many cases the development team fails to see that there is a potential security threat in them…
Those bugs tend to be fixed in non-security updates, and if you don’t have a proper update policy, those security issues will accumulate over time, ready to be exploited.
And there is no way to avoid this confusion during development without skyrocketing the development costs (especially with 3rd-party and open-source software).
My recommendation: stay up to date, create a meaningful policy, and also plan for 0-days…
0-days are not uncommon
At this point, even basic security testing can identify what you need to improve in your own policy… and applying those improvements across the whole organization will buy you some protection against unidentified vulnerabilities.
And if you want to hire the best testers to document all the unidentified vulnerabilities, I will tell you that 0-days are not uncommon, so… there is no way to identify ALL the vulnerabilities…
But don’t worry, there is hope. Many times in my career I have had to repel attacks involving 0-days, and the strategy was very simple:
Meaningful logs: they help you understand how a system was exploited and let you learn from it.
Identify risk and relevance: risk management is not a CVE list; identify the systems that comply and those that don’t comply with your security policy, and assign each a relevance/risk level.
Pivot, pivot, and pivot: try to understand all the possible pivots an adversary might use and restrict them.
Expect them (the 0-days): never underestimate your adversary… Once you admit that even on the bleeding edge you may be vulnerable, you can understand the importance of ->
Having security layers (the onion strategy): layered security is a nightmare for a successful attack; it may force the adversary to combine and chain several exploits, which is a very hard challenge. Our bet is that the attacker will get stuck, and we will catch him before the maze is solved.
Reducing the attack surface: once you understand the hidden risks of 0-days and missed vulnerabilities (even if you have patched everything), you should work on reducing the attack surface… the bigger the attack surface, the bigger the probability of a 0-day or missed vulnerability…
And this should be the beginning of a meaningful security policy. But remember, it has to be adapted and complemented by policy-focused security testing!
When you do a security assessment, you need to elaborate recommendations to mitigate the potential risks. This is one of the most difficult parts, because bad assumptions can easily lead to a false sense of security and to overspending…
Kubuntu and most Ubuntu installations come with a very basic installer that does not let you personalize the encryption. For example, if you want Windows and Linux together on the same hard drive, the installer won’t let you set up an encrypted dual boot; it will force you to use the whole disk, removing the existing Windows partition.