Security - Ubuntu: Questions and Answers (2014)

Ubuntu: Questions and Answers (2014)

Security

Wiki by user kees-cook

Ubuntu has many Security Features, and a Security Team dedicated to keeping users safe and up to date. Please feel free to get involved, or read through the Ubuntu Security FAQ.


Questions

Q: What is the difference between "gksudo nautilus" and "sudo nautilus"?

Tags: security (Next Q), sudo (Next Q), gksu (Next Q)

I've been using gksudo nautilus and sudo nautilus through Alt+F2.

What's the difference? They look very similar!

Tags: security (Next Q), sudo (Next Q), gksu (Next Q)

User: drkenobi


Answer by chris-wilson

Taken from here:

You should never use normal sudo to start graphical applications as root. You should use gksudo (kdesudo on Kubuntu) to run such programs instead. gksudo sets HOME=/root and copies .Xauthority to a tmp directory. This prevents files in your home directory from becoming owned by root.

Please note that this is primarily about configuration files. If you run Nautilus as root, even with gksu/gksudo, and you create a file or folder anywhere with it (including in your home directory), that file or folder will be owned by root. But if you run Nautilus (or most other graphical applications) as root with sudo, they may save their configuration files in your home directory (rather than root's home directory). Those configuration files may be owned by root and inaccessible when you're not running as root, which can severely mess up your settings, and may even keep some applications from working altogether.

The solution, once you have made this mistake, is to find the configuration files and delete them or chown them back to your non-root user. Many such files start with a dot (.) or are contained in a directory whose name starts with a dot. Some are located inside the .config folder in your home directory. To see files and folders that start with a . in Nautilus, press Ctrl+H (this shows hidden files). To see them with ls, use the -a (or -A) flag.

To find if there are files not owned by you in your home directory, you can use the following command in a terminal:

find $HOME -not -user $USER -exec ls -lad {} \;

which will list all files under the home directory not owned by the user.
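Once the find command above has shown which files are affected, a minimal sketch of taking ownership back; the directory name here is only an example of what might turn up:

sudo chown -R "$USER": "$HOME/.config/nautilus"   # example path only; the trailing colon also resets the group to your login group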


Tags: security (Next Q), sudo (Next Q), gksu (Next Q)


Q: Why is it bad to login as root?

Tags: security (Next Q), users (Next Q)

I've often come across posts on forums or other websites where you see people joking in such a manner about running/logging in as root, as if it's something awful that everyone ought to know about. However, there isn't much that a search reveals on the matter. It may be widely known to Linux experts, but I really don't know why. I remember always running as root when I first tried Linux years ago (Redhat and Mandrake) and don't remember running into any problems because of that. There are actually some distros that have a bright red background with alert signs all over it as wallpaper for the root user (Suse?). I still use the "Administrator" account for regular use on my Windows installation and haven't ever run into any problems there either.

Tags: security (Next Q), users (Next Q)

User: mussnoon


Answer by lazypower

It defeats the security model that's been in place for years. Applications are meant to be run with non-administrative privileges (or as mere mortals), so you have to elevate their privileges to modify the underlying system. For example, you wouldn't want a crash in Rhythmbox to wipe out your entire /usr directory because of a bug, or the vulnerability that was just posted in ProFTPD to let an attacker gain a root shell.

It's just good practice on any operating system to run your applications at the user level and leave administrative tasks to the root user, and only on a per-need basis.


Answer by vojtech-trefny

Just one word: security.

1. You're logged in as root = all applications run with root privileges -- every vulnerability in Firefox, Flash, OpenOffice etc. can now destroy your system, because any malware now has access everywhere. Yes, there are only a few viruses for Ubuntu/Linux, but that is also because of good security and the default unprivileged user.

2. It's not only about viruses -- a small bug in an application could erase some system files or...

3. When you're logged in as root, you can do everything -- the system won't ask! Do you want to format this disk? OK, just one click and it's done, because you're root and you know what you're doing...


Answer by marlon

Running as root is bad because:

1. Stupidity: Nothing prevents you from doing something stupid. If you try to change the system in any way that could be harmful, you need sudo, which pretty much guarantees a pause (while you enter the password) in which to realize that you are about to make a possibly big/costly change.

2. Security: It has been mentioned quite a few times in this question already, but basically it's the same thing: the system is harder to hack if the attacker doesn't know the admin user's login account. Running as root means the attacker already has one half of the working set of admin credentials.

3. You don't really need it: If you need to run several commands as root and you are annoyed by having to enter your password again once sudo has expired, all you need to do is sudo su or sudo bash (or whatever your favourite shell is) and you are now root. Want to run some commands using pipes? Then use sudo sh -c "command1 | command2" (see the sketch after this list).

4. You can always use it in the recovery console: The recovery console allows you to try and recover from doing something stupid, or to fix a problem caused by an app (which you still had to run with sudo :)). Ubuntu doesn't set a password for the root account in this case, but you can search online for how to change that; doing so will make it harder for anyone with physical access to your box to do harm.
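A minimal sketch of point 3 above; the pipeline used here is only a placeholder:

sudo su                               # open a root shell; type 'exit' when you are done
sudo sh -c "ls /root | wc -l"         # run a whole pipeline as root (placeholder pipeline)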

The reason why you couldn't find information about why it's bad is because, well, there is way too much data on the internet :) and a lot of people who have been using Linux for a long time think like you do. This way of thinking about the root account is fairly new (a decade maybe?) and a lot of people still get annoyed by having to use sudo, especially if they are working on a server, which means they went in with the intention of making system changes. Probably because of previous bad experiences and security standards, most sysadmins know better, but they still don't like it :).


Tags: security (Next Q), users (Next Q)


Q: Are PPA's safe to add to my system and what are some "red flags" to watch out for?

Tags: security (Next Q), ppa (Next Q)

I see a lot of interesting programs out there that can only be obtained by adding a "PPA" to the system but, if I'm understanding correctly, we should stay within the official "repositories" for adding software to our system. Is there any way for a novice to know if a "PPA" is safe or if it should be avoided? What tips should the user know about when dealing with a PPA in the first place?

Tags: security (Next Q), ppa (Next Q)

User: rob


Answer by luis-alvarado

PPAs (Personal Package Archives) are used to add specific software to your Ubuntu, Kubuntu or any other PPA-compatible distro. The "safeness" of a PPA depends mostly on 3 things:

1. Who made the PPA - An official PPA from WINE or LibreOffice like ppa:libreoffice/ppa and a PPA that I created myself are not the same. You do not know me as a PPA maintainer, so the trust and safety are VERY low for me (since I could have made a corrupted package, an incompatible package or anything else bad), but for LibreOffice and the PPA they offer on their website, THAT gives it a certain safety net. So who made the PPA, and how long he or she has been making and maintaining it, influences how safe the PPA is for you. PPAs, as mentioned above in the comments, are not certified by Canonical.

2. How many users have used the PPA - For example, I have a PPA from http://winehq.org in my personal PPA. Would you trust ME, with 10 users confirming they use my PPA and 6 of them saying it sucks, over the one Scott Ritchie offers as ppa:ubuntu-wine/ppa on the official winehq website? His PPA has thousands of users (including me) who use it and trust his work. This is work that has several years behind it.

3. How updated the PPA is - Let us say you are using Ubuntu 10.04 or 10.10, and you want to use THAT special PPA. You find out that the last update to that PPA was 20 years ago.. O.o. The chances of using THAT PPA are null. Why? Because the package dependencies that PPA needs are very old, and maybe the updated ones change so much code that they won't work with the PPA and could possibly break your system if you install any of the packages from that PPA.

How updated a PPA is influences the decision to use it. If it is out of date, users would rather go look for another one that is more up to date. You do not want Banshee 0.1 or Wine 0.0.0.1 or OpenOffice 0.1 Beta Alpha Omega Thundercat Edition with the latest Ubuntu. What you want is a PPA that is updated for your current Ubuntu. Remember that a PPA mentions which Ubuntu version (or versions) it was made for.

As an example of this here is an image of the versions that are supported in the Wine PPA:

[Image: list of Ubuntu releases supported by the Wine PPA]

Here you can see that this PPA is supported since Dinosaurs.

One BAD thing about how updated a PPA is: the maintainer may tend to push the latest, greatest, cutting-edge version of a specific package into the PPA. The downside is that if you are going to test the latest of something, you ARE going to find some bugs. Try to stick with PPAs that are updated to a stable version and not an unstable, testing or dev version, since those might/will contain bugs. The idea of having the latest is also to TEST it, report what problems were found and solve them. Examples of this are the daily Xorg PPAs and daily Mozilla PPAs. You will get about 3 daily updates for X.org or Firefox if you use the dailies. This is because of the work they put in there, and if you are using their daily PPAs it means you want to help with bug hunting or development and NOT use them in a production environment.

Basically stick with these 3 points and you will be safe. Always look for the maker/maintainer of the PPA, always see if many users have used it, and always see how updated the PPA is. Places like OMGUbuntu, Phoronix, Slashdot, The H, WebUpd8 and even here on AskUbuntu are good sources to find many users and articles talking about and recommending PPAs that they have tested.

Stable PPA Examples - LibreOffice, OpenOffice, Banshee, Wine, Kubuntu, Ubuntu, Xubuntu, PlayDeb, GetDeb, VLC are good and safe PPAs from MY experience.

Semi Stable PPA - The X-Swat PPA is an in-between PPA, somewhere between bleeding edge and stable.

Bleeding Edge PPA - Xorg-Edgers is a bleeding-edge PPA, although I should mention that after 12.04 this PPA has become more and more stable. I would still mark it as bleeding edge, but it is stable enough for end users.

Selectable PPA - Handbrake offers here a way for the user to choose: do you want the stable version or the bleeding-edge (also referred to as Snapshot) version? In this case you can select which one you want to use.

Note that if you use, for example, the X-Swat PPA together with the Xorg-Edgers PPA, you will get a mix of the two (with priority towards Xorg-Edgers). This is because both try to include almost the same packages, so they will overwrite each other and only the most updated one will show in your repositories (unless you manually tell it to grab the package from X-Swat).

Some PPAs might update some of your packages when you add them to your repositories, because they will overwrite a certain package with their own version to make the PPA software work correctly on your system. This might include some code packages, Python versions, etc. Others, like the LibreOffice PPA, will remove all traces of OpenOffice from your system to install the LibreOffice packages instead. Basically, read what other users have commented about a specific package and also check whether the package is compatible with your Ubuntu version.

As the comment below by Jeremy Bicha suggests, some bleeding-edge PPAs (PPAs that stay very up to date, including adding Alpha, Beta or RC quality software) could potentially damage your whole system (in the worst case). Jeremy mentions one example of many.


Answer by fossfreedom

To develop PPAs on Launchpad, the contributor must have signed the Ubuntu Code of Conduct. This signifies that the developer must abide by a minimum set of standards.

Usually people should then consult the Ubuntu Forums to see who has used particular PPAs and whether they have caused any issues.

For a "novice" or "noob" - my best advice is to steer clear of PPA's until you feel confident that you understand a few things about the command line, potential error messages and a few things how to diagnose issues.

To remove PPAs that are causing issues, you can most of the time use "ppa-purge".
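A minimal sketch of adding a PPA and later purging it again; the PPA name below is only a placeholder:

sudo add-apt-repository ppa:some-user/some-ppa   # add the PPA (placeholder name)
sudo apt-get update
sudo apt-get install ppa-purge
sudo ppa-purge ppa:some-user/some-ppa            # downgrade its packages and disable the PPA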

If you are feeling nervous, then consider an image backup of your computer with a tool like Clonezilla. That way, if things go wrong and you can't resolve it, at least you have a quick means to restore your computer back to the way it was before you started playing.

Having said all that, PPAs are extremely useful for getting the latest versions of software - especially for those who don't try to upgrade every 6 months and stick with the LTS version of Ubuntu.


Answer by kelley

It isn't just a matter of malware, as has already been said. It is also that some of the software might really still be in the testing stage and not ready for production use. If you install it and rely upon it to get work done, you might find that it is buggy, unreliable, and can crash - leaving you without the work you have done.

Some of it might also not get along well with other aspects of Ubuntu, such as Unity or Gnome, causing problems that are difficult to trace, and perhaps even making your system unstable.

This is not because the software is bad, but because it has perhaps not yet been fully tested, or because it was made available so that people could test it, but not yet intended to be generally released as production software. So you should use caution, although some of it is really quite good.

A number of months ago I installed a recommended package from a particular PPA, and it trashed my system enough that I had to reinstall Ubuntu. I was a new user and didn't know what else to do; with a bit more knowledge I might have been able to solve the problem and restore it without doing a reinstall (although that, too, was useful to me in learning Ubuntu, but if I had had work saved on my machine I would have lost it).

So be careful, ask questions, make frequent backups (!!!), and know that malware is unlikely (though not impossible).


Tags: security (Next Q), ppa (Next Q)


Q: How to delete file(s) in secure manner?

Tags: security (Next Q)

Is there a way to make sure that a deleted file can not be recovered?

As you know, deleting a file by pressing Shift+Del or using the trash doesn't mean the file is gone forever. It still resides somewhere on the computer.

In day-to-day life, law enforcement or thieves can use recovery programs to reveal our private data.

They can recover data that resides on hard disks, in RAM, and even on USB drives.

To protect us from these people, what should one do? Which program should one use?

Note: I don't want an answer like "first use a recovery program, then erase that file with shred". With just one program, I want to erase all unused/deleted files on my computer.

Tags: security (Next Q)

User: user20296


Answer by takkat

Shred

This command-line tool is already installed (it is part of the core utilities in Ubuntu) and can securely erase and overwrite single files using the Gutmann method.

Fast shredding

shred -vzn 0 /dev/sdc1

erases a whole partition by overwriting everything with 0s in a single iteration. If no legal requirements demand another procedure, doing so is most probably sufficient to securely delete your private data (Craig Wright, Lecture Notes in Computer Science, 2008, 5352, 243-257).

Secure shredding

shred -vzn 3 /dev/sdc1

erases the whole partition using 3 iterations with random numbers. In addition, the -z option writes zeros at the end to hide the shredding process. This will take 4 times longer than the fast method.

NOTE: By shredding a partition we overwrite that partition with 0s or random numbers. It therefore deletes everything on that partition, including file system caches, forever. This can also be used to remove unwanted remnants of deleted files. Files we want to keep have to be backed up before shredding.
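To shred a single file rather than a whole partition, a short sketch along these lines can be used (the file name is only a placeholder):

shred -vzu -n 3 secret.pdf   # overwrite 3 times, add a final pass of zeros (-z), then remove the file (-u)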


Wipe (install it with sudo apt-get install wipe)

This command-line utility offers more options, including the ability to erase directories in addition to single files.
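A short sketch of wiping a whole directory with wipe; the directory name is only a placeholder:

wipe -rf ~/private-notes   # -r recurses into the directory, -f skips the confirmation prompt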


Additional notes on journaling file systems and SSD:

· Please read the notes in the linked manpages on security issues arising from still recoverable backups in journaling file systems when erasing single files. Overwriting whole partitions rather than single files will effectively erase all data even when using a journaling file system.

· Erasing data on a solid state drive (SSD) can, if at all, only be done by overwriting the whole drive (not only single partitions) with several iterations. Some SSDs have a built-in feature to erase data, but this may not always be effective (see the link from the comment). At present there is no general recommendation on the wiping process or the number of erase iterations needed to securely remove all data remnants on all SSDs available.

These options can be added to the context menu of Nautilus and Thunar.

· In Thunar, open "Edit" then "Configure Custom Actions":

Add (the plus sign)

Name: "Shred File"

Description: whatever you like

Action: "shred -u %f"

In "Appearance Conditions", select "Other Files"

Similarly for wipe.

· For Nautilus, see this question and those related to it.


Answer by flamsmark

There isn't one command that you can run which will easily clean up all the already-deleted files for you. However, there are a number of things you can do to reduce your vulnerability to this sort of attack in future.

As others have said, using tools like shred or srm allows you to delete a specific file by actually overwriting it, rather than just removing it from the filesystem. If you're feeling bold, you can replace the rm command with shred or srm to securely delete files going forward. That means that whenever you (or another program) try to delete something using rm, the secure-delete command will run instead.

However, if you're using a solid state disk, or even some newer mechanical disks, shred and other overwriting-based methods may not be effective, since the disk may not actually write where you think it's writing (source).


Full-Disk Encryption

A more convenient option is full-disk encryption. If you use the alternate installer, Ubuntu can automatically set up a fully-encrypted disk for you, but you can also customize and configure the settings yourself. Once installed, the encryption is almost invisible to you: after you enter the passphrase (be sure to pick a good, long one) when the computer starts up, everything looks and feels just like normal Ubuntu.

You can also encrypt external media like USB drives using Ubuntu's Disk Utility. Setting up an encrypted external disk is as simple as checking the "encrypt underlying filesystem" box when formatting the disk. You can even store the passphrase on your (encrypted) keyring, so that you don't need to enter the phrase every time you plug that disk into your computer.

If your whole disk -- and all your removable media -- is encrypted, there's much less to worry about. A thief or police officer would need to grab your computer while it's on (or within a minute or two of turning it off, if they're very good) in order to access your data. If you hibernate (rather than suspend) your computer when it's not in use, then you should be pretty safe.

If you ever need to completely destroy all your data, you don't need to do a Gutmann wipe of your whole disk. Simply overwrite the very beginning of the disk, to destroy the headers for the encrypted volume. Unlike with a regular filesystem, this will actually make it impossible to recover the data.
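A minimal and deliberately destructive sketch of that idea, assuming a LUKS-style setup where the encryption header sits at the start of the disk; /dev/sdX is a placeholder for the encrypted device:

# WARNING: this irreversibly destroys the encryption header, and with it all data on the disk.
sudo dd if=/dev/urandom of=/dev/sdX bs=1M count=16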


So, how do you go from your current setup to a safely encrypted disk? It's quite a challenge to retrofit a currently-installed operating system to use an encrypted disk. The easiest approach is to backup all your data and settings, then reinstall with an encrypted disk. When backing up, make sure to back up your data to an encrypted external drive, but don't save the passphrase in your keyring.

After you've backed everything up, you may want to aggressively wipe your hard drive, to make sure that none of your existing data can be recovered in the future. If you're using an SSD, the process is even more challenging, so depending how much you want to invest in the process, it might be worth destroying your current disk (a challenging proposition) and starting with a new one.

When reinstalling the OS, if you haven't aggressively wiped the disk already, you should make sure to completely fill the new encrypted partition, which will overwrite all your old data. Once you've restored your backup, you may want to aggressively wipe the start of the backup disk, to destroy the encryption header, so that it can't be recovered again.


Tags: security (Next Q)


Q: What is the best way to restrict access to adult content?

Tags: security (Next Q)

I bought my kids a PC and installed 12.04 (Unity) on it. The bottom line is, I want my children to use the computer unsupervised while I have confidence they cannot access anything inappropriate.

What I have looked at:

I was looking at Scrubit, a tool which allows me to configure my wifi router to block content, and this solution would also protect my other PCs and mobile devices. This may be overkill as I just want the solution to work on one PC.

I also did some Google searches and came across an application called Nanny (it seems to look the part). My experience of OSS is that the best solutions frequently never appear first in a Google search, and in this case I need to trust the methods, so my question is very specific.

I want to leverage your knowledge and experience to understand what the best way is to restrict adult content on 12.04 LTS, as this is important to me. It may be a combination of things, so please don't answer this question with "try this or that" and then give me some PPA, unless you can share your experience of how good it is and, of course, whether there are any constraints.

Tags: security (Next Q)

User: stephen-myall


Answer by bruno-pereira

One easy and great way of doing it is to go straight to the source and use DNS filtering for that purpose.

Let's face it, you cannot protect your children from all the internet's risks all the time, but by using some sort of filtering you can at least enforce it at home.

One of the advantages of filtering at the DNS level is that even if you have a smart kid who knows he can bypass blocking software installed on his system by using the Ubuntu LiveCD, the DNS servers can still be enforced via your router for every system that obtains its settings through DHCP.

Have a look at the OpenDNS parental controls option for an example of how a DNS filter would work for your computers at home. Many other DNS services allow you to do this; OpenDNS is just one that I normally use.

Set up an account, change your router to use the DNS servers from OpenDNS, activate parental filtering, and no matter what OS your kids are using, any machine that obtains an IP via the router will be blocked from accessing the filtered sites.

Of course, if your kids are smart enough and know how to spoof the DNS settings it will be easy for them to bypass it, but again, as said before, you won't be able to protect your children all the time from all the dangers the internet presents.
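To check from an Ubuntu machine which resolver is actually answering, and to compare against an OpenDNS resolver directly, a quick sketch (the domain is a placeholder; 208.67.222.222 is one of OpenDNS's public resolvers; both tools come from the dnsutils package):

nslookup example.com                  # the "Server:" line shows which DNS server answered
dig example.com @208.67.222.222       # query an OpenDNS resolver directly for comparison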


Answer by takkat

The Internet is not a safe place for kids.

We all know of content we don't want our kids to be exposed to, be it either accidentally or on purpose. We therefore have to do something about it. There are different approaches to gain some security but all fail when it comes to the details. Let me explain why:

· Whitelists
Whitelists securely block unwanted content and can be generated by a variety of browser plugins or parental control software but they will not last for long. We want our kids to discover the world, to learn how to operate the Internet, to learn how to find information, and to learn how to play games that are safe for them. They will not learn how to do this if they only have access to a small list of sites granted by Daddy where the next click on a button leads to the "BANNED" page. Only very young kids may be happy for some months with a whitelist.

· Blacklists
Blacklists, such as those offered by DNS services, are meant to contain all known bad sites and block them. This task is hopeless. We cannot possibly know of all bad sites; they pop up every day by the thousands. The makers of Dansguardian put it like this:
Blacklists such as offered e.g. by DNS services are meant to contain all known bad sites and block them. This task is ridiculous. We can not possibly know of all bad sites. They pop up everyday in thousands. The makers of Dansguardian put it like that:

The web is a fast changing place and even large web search engines such as Google or Altavista or Yahoo don't even know of half of it. This makes filtering by web address (URL) difficult as sites change and new ones come up all the time.

· Content Filters
To overcome the limitations of a blacklist we additionally need some sort of content filtering, which in my opinion is best offered by dansguardian; this also involves setting up a proxy based on Squid3. Still, it's tedious to maintain the filters. This may be fine for schools, where a full-time employee does all the work, but it is unlikely we will find the time at home to do so. Using pre-made filters is likely a bad idea, as the bad sites know about them and avoid letting those suspicious phrases appear on their pages (look at the spam mail you get and you have an idea of how they do it).
To overcome limitations of a blacklist we additionally need some sort of content filtering as it is to my opinion best offered by dansguardian Install dansguardianthat also involves setting up a proxy based on Squid3. Still, its tedious to maintain the filters. This may be good for schools when a full time employee does all the work but it is unlikely we find the time at home to do so. Using pre-made filters is likely a bad idea as the bad sites know about them and avoid those suspicious phrases to appear on their pages (look at the spam mail you get and you have an idea on how they do it).

· Smart Kids
As already mentioned, kids become smart, and we desperately want them to become so. Unfortunately, they will then also learn how to overcome most of our filters. They will (yes, they will, no matter what we do) gain access to anything they want by the time they are smart. All our blocking efforts will fail by then. Before that, however, they may not be interested in all that adult stuff at all, so there would in reality be no need to block anything. By the time they are smart, they need to be smart enough to know what a bad site is, and they should be smart enough to be interested in other things rather than visits to adult pages.
As already mentioned kids become smart, and we desperately want them to become so. Unortunately then they will also learn how to overcome most of our filters. They will (yes, they will, no matter what we do) gain access to anything they want by the time they are smart. All our blocking efforts will fail by then. Before that however they may not be interested in all that adult stuff at all. So there would in reality not be a need to block anything. By the time they are smart they need to be smart enough to know what is a bad site, and they should be smart enough to be interested in other things rather than visits on adult pages.

· Smart Parents
We all are or will be smart parents. We want to teach our kids how to operate Ubuntu and how to discover the Internet. Smart parents watch what their kids do and talk to them about it. We should give them the knowledge of what to do when they come across sites that are different or may be harmful. We want our kids to tell us what they do. Smart parents spend the time they would otherwise need to set up and maintain parental controls with their kids instead. We also should not leave the kids alone. Put the computer in a place where you can pass by and have a short look at what is displayed on the screen.
We all are or will be smart parents. We want to teach our kids how to operate Ubuntu and how to discover the Internet. Smart parents watch what their kids do and they talk to them about what they do. We should provide them with the knowledge on what to do when they come to sites that are different or may be harmful. We want our kids to tell us what they do. Smart parents spent the time they would need to set up and maintain a parental control rather with their kids. We also should not leave the kids alone. Put the computer to a place where you can pass by and have a short look what is displayed on the screen.

In summary, the "best" way to keep our kids from visiting unwanted sites is to become smart parents who educate their kids to become smart. But I know from my own experience that this too may fail, has inconsistencies, and will likely not achieve 100% safety. It still is the best option.


Tags: security (Next Q)


Q: What is apparmor?

Tags: security (Next Q)

I hear a lot of talk about apparmor, I want to know the following:

· What is apparmor?

· How does apparmor work?

Tags: security (Next Q)

User: alvar


Answer by seth

What it is

Apparmor is a Mandatory Access Control (or MAC) system. It uses LSM kernel enhancements to restrict programs to certain resources. AppArmor does this with profiles loaded into the kernel when the system starts. Apparmor has two types of profile modes, enforcement and complain. Profiles in enforcement mode enforce that profile's rules and report violation attempts in syslog or auditd. Profiles in complain mode don't enforce any profile rules, just log violation attempts.

In Ubuntu, AppArmor is installed by default. It confines applications to profiles that determine what files and permissions a program needs access to. Some applications come with their own profiles, and more can be found in the apparmor-profiles package.

You can install apparmor-profiles by running sudo apt-get install apparmor-profiles.

I found a good example of Apparmor on the Ubuntu forums that I rewrote for this post.

AppArmor is a security framework that prevents applications from turning evil. For example: if I run Firefox and visit a bad site that tries to install malware that would delete my home folder, AppArmor puts limits on Firefox, preventing it from doing anything I don't want (like accessing my music, documents, etc.). This way, even if your application is compromised, no harm can be done.

How it works

The apparmor-utils package contains command-line tools for configuring AppArmor. Using it you can change AppArmor's execution mode, find the status of a profile, create new profiles, etc.

These are the most common commands:

NOTE: Profiles are stored in /etc/apparmor.d/

· You can check AppArmor's status with sudo apparmor_status. You will get a list of all profiles loaded, all profiles in enforce mode, all profiles in complain mode, which processes are defined in enforce/complain mode, etc.

· To put a profile in complain mode you use sudo aa-complain /path/to/bin, where /path/to/bin is the path to the program's binary. For example, running sudo aa-complain /usr/bin/firefox will put Firefox in complain mode.

· You use sudo aa-enforce /path/to/bin to put a program's profile into enforce mode.

· You can put all profiles into complain/enforce mode with sudo aa-complain /etc/apparmor.d/* and sudo aa-enforce /etc/apparmor.d/* respectively.

To load a profile into the kernel you would use apparmor_parser. You can reload profiles using the -r parameter.

· To load a profile use: cat /etc/apparmor.d/profile.name | sudo apparmor_parser -a, which effectively prints the contents of profile.name into Apparmor's parser.

· To reload a profile you use the -r parameter, like so: cat /etc/apparmor.d/profile.name | sudo apparmor_parser -r

· To reload all of Apparmor's profiles use: sudo service apparmor reload

To disable a profile you link it to /etc/apparmor.d/disable/ using ln like this: sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/ then run: sudo apparmor_parser -R /etc/apparmor.d/profile.name.

NOTE: Do not confuse apparmor_parser -r with apparmor_parser -R THEY ARE NOT THE SAME THING!

· To re-enable a profile, remove the symbolic link to it in /etc/apparmor.d/disable/ with sudo rm /etc/apparmor.d/disable/profile.name, then load it again using the -a parameter: cat /etc/apparmor.d/profile.name | sudo apparmor_parser -a

· You can disable AppArmor with sudo service apparmor stop and remove it from the boot sequence with sudo update-rc.d -f apparmor remove

· Start AppArmor with sudo service apparmor start and add it back to the boot sequence with sudo update-rc.d apparmor defaults

Profiles

Profiles are stored in /etc/apparmor.d/ and are named after the full path to the executable they profile, replacing '/' with '.'. For example /etc/apparmor.d/bin.ping is the profile for ping in /bin.

There are two main types of entries used in profiles:

1. Path Entries determine what files an application can access.

2. Capability entries determine what privileges a process can use.

Let's look at the profile for ping, located in /etc/apparmor.d/bin.ping, as an example.


#include <tunables/global>

/bin/ping flags=(complain) {
  #include <abstractions/base>
  #include <abstractions/consoles>
  #include <abstractions/nameservice>

  capability net_raw,
  capability setuid,
  network inet raw,

  /bin/ping mixr,
  /etc/modules.conf r,
}

#include <tunables/global> includes the file global from the directory tunables; this allows statements pertaining to multiple applications to be placed in a common file.

/bin/ping flags=(complain) sets the path to the profiled program and sets the mode to complain.

capability net_raw allows the application access to the CAP_NET_RAW Posix.1e capability.

/bin/ping mixr allows the application read and execute access to the file.

/etc/modules.conf r, The r gives the application read privileges for /etc/modules.conf

NOTE: After creating/editing a profile, you need to reload the profile for changes to take effect.
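A minimal sketch of that edit-and-reload cycle, using the ping profile above; the extra rule shown in the comment is purely hypothetical:

sudo nano /etc/apparmor.d/bin.ping                  # add a rule inside the braces, e.g. a hypothetical:  /var/log/ping.log w,
sudo apparmor_parser -r /etc/apparmor.d/bin.ping    # reload the edited profile into the kernel
sudo apparmor_status | grep ping                    # confirm the profile is still loaded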

Here is a list of permissions you can use:

· r - read

· w - write

· ux - Unconstrained Execute

· Ux - Unconstrained Execute -- scrub the environment

· px - Discrete profile execute

· Px - Discrete profile execute -- scrub the environment

· ix - Inherit execute

· m - allow PROT_EXEC with mmap(2) calls

· l - link

SOURCES:

http://ubuntuforums.org/showthread.php?t=1606499

http://ubuntuforums.org/showthread.php?t=1008906

https://wiki.ubuntu.com/AppArmor

https://help.ubuntu.com/12.10/serverguide/apparmor.html


Tags: security (Next Q)


Q: How to patch the Heartbleed bug (CVE-2014-0160) in OpenSSL?

Tags: security

As of today, a bug in OpenSSL has been found affecting versions 1.0.1 through 1.0.1f (inclusive) and 1.0.2-beta.

From Ubuntu 12.04 onward, we are all vulnerable to this bug. In order to patch this vulnerability, affected users should update to OpenSSL 1.0.1g.

How can every affected user apply this update now?

Tags: security

User: lucio


Answer by florian-diesch

Security updates are available for 12.04, 12.10, 13.10 and 14.04; see Ubuntu Security Notice USN-2165-1.

So first you need to apply the available security updates, for example by running

sudo apt-get update

sudo apt-get upgrade

from the command line.

Do not forget to restart the services (HTTP, SMTP, etc.) that use the affected OpenSSL version, otherwise you are still vulnerable. See also Heartbleed: What is it and what are options to mitigate it? on Serverfault.com.

The following command shows (after an upgrade) all services that need to be restarted:

sudo find /proc -maxdepth 2 -name maps -exec grep -HE '/libssl\.so.* \(deleted\)' {} \; | cut -d/ -f3 | sort -u | xargs --no-run-if-empty ps uwwp
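Then restart each service that the command lists. A hedged example, assuming an Apache web server and a Postfix mail server happen to be among them:

sudo service apache2 restart    # example only: restart a web server that links against the old libssl
sudo service postfix restart    # example only: restart an SMTP server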

After that, you need to regenerate all server SSL keys, then evaluate whether your keys may have leaked, in which case attackers may have retrieved confidential information from your servers.
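A minimal sketch of generating a fresh key and certificate signing request with OpenSSL; the file names are placeholders, and the command will prompt interactively for the certificate details:

openssl req -new -newkey rsa:2048 -nodes -keyout new-server.key -out new-server.csr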


Answer by gilles

The bug is known as Heartbleed.

Am I vulnerable?

Generally, you're affected if you run some server that you generated an SSL key for at some point. Most end-users are not (directly) affected; at least Firefox and Chrome don't use OpenSSL. SSH is not affected. The distribution of Ubuntu packages isn't affected (it relies on GPG signatures).

You are vulnerable if you run any kind of server that uses OpenSSL 1.0.1 through 1.0.1f. The affected Ubuntu versions are 11.10 oneiric through the 14.04 trusty pre-releases. It's an implementation bug, not a flaw in the protocol, so only programs that use the OpenSSL library are affected. If you have a program linked against the old 0.9.x version of OpenSSL, it isn't affected. Only programs that use the OpenSSL library to implement the SSL protocol are affected; programs that use OpenSSL for other things are not affected.

If you ran a vulnerable server exposed to the Internet, consider it compromised unless your logs show no connection since the announcement on 2014-04-07. (This assumes that the vulnerability wasn't exploited before its announcement.) If your server was only exposed internally, whether you need to change the keys will depend on what other security measures are in place.

What is the impact?

The bug allows any client who can connect to your SSL server to retrieve about 64kB of memory from the server. The client doesn't need to be authenticated in any way. By repeating the attack, the client can dump different parts of the memory in successive attempts.

One of the critical pieces of data that the attacker may be able to retrieve is the server's SSL private key. With this data, the attacker can impersonate your server.

How do I recover on a server?

1. Take all affected servers offline. As long as they're running, they're potentially leaking critical data.

2. Upgrade the libssl1.0.0 package, and make sure that all affected servers are restarted.
You can check if affected processes are still running with grep 'libssl.*(deleted)' /proc/*/maps

3. Generate new keys. This is necessary because the bug might have allowed an attacker to obtain the old private key. Follow the same procedure you used initially.

o If you use certificates signed by a certification authority, submit your new public keys to your CA. When you get the new certificate, install it on your server.

o If you use self-signed certificates, install the new certificate on your server.

o Either way, move the old keys and certificates out of the way (but don't delete them, just ensure they aren't getting used any more).

4. Now that you have new uncompromised keys, you can bring your server back online.

5. Revoke the old certificates.

6. Damage assessment: any data that has been in the memory of a process serving SSL connections may potentially have been leaked. This can include user passwords and other confidential data. You need to evaluate what this data may have been.

o If you're running a service that allows password authentication, then the passwords of users who connected since a little before the vulnerability was announced should be considered compromised. (A little before, because the password may have remained unused in memory for a while.) Check your logs and change the passwords of any affected user.

o Also invalidate all session cookies, as they may have been compromised.

o Client certificates are not compromised.

o Any data that was exchanged since a little before the vulnerability may have remained in the memory of the server and so may have been leaked to an attacker.

o If someone has recorded an old SSL connection and retrieved your server's keys, they can now decrypt their transcript. (Unless PFS was ensured; if you don't know, it wasn't.)

How do I recover on a client?

There are only a few situations in which client applications are affected. The problem on the server side is that anyone can connect to a server and exploit the bug. In order to exploit a client, three conditions must be met:

· The client program used a buggy version of the OpenSSL library to implement the SSL protocol.

· The client connected to a malicious server. (So for example, if you connected to an email provider, this isn't a concern.) This had to happen after the server owner became aware of the vulnerability, so presumably after 2014-04-07.

· The client process had confidential data in memory that wasn't shared with the server. (So if you just ran wget to download a file, there was no data to leak.)

If you did that between 2014-04-07 evening UTC and upgrading your OpenSSL library, consider any data that was in the client process's memory to be compromised.

References

· The Heartbleed Bug (by one of the two teams who independently discovered the bug)

· How exactly does the OpenSSL TLS heartbeat (Heartbleed) exploit work?

· Does Heartbleed mean new certificates for every SSL server?

· Heartbleed: What is it and what are options to mitigate it?


Answer by crimi

To see which OpenSSL version is installed on Ubuntu run:

dpkg -l | grep openssl

If you see the following version output, the patch for CVE-2014-0160 should be included.

ii openssl 1.0.1-4ubuntu5.12 Secure Socket Layer (SSL)...

Looking at https://launchpad.net/ubuntu/+source/openssl/1.0.1-4ubuntu5.12, it shows which bugs are fixed:

...

SECURITY UPDATE: memory disclosure in TLS heartbeat extension

- debian/patches/CVE-2014-0160.patch: use correct lengths in

ssl/d1_both.c, ssl/t1_lib.c.

- CVE-2014-0160

-- Marc Deslauriers <email address hidden> Mon, 07 Apr 2014 15:45:14 -0400

...


Tags: security


Q: How can I install just security updates from the command line?


Q: How to harden an SSH server?


Q: What is the difference between "gksudo nautilus" and "sudo nautilus"?


Q: Why is it bad to login as root?


Q: How to protect Ubuntu from fork bomb


Q: Are PPA's safe to add to my system and what are some "red flags" to watch out for?


Q: How to delete file(s) in secure manner?


Q: What is the best way to restrict access to adult content?


Q: How can I use a passcode generator for authentication for remote logins?


Q: How do I keep track of failed SSH log-in attempts?


Q: What is apparmor?


Q: How to patch the Heartbleed bug (CVE-2014-0160) in OpenSSL?