[ale] why I love windows

mike at trausch.us
Tue Jan 31 14:02:13 EST 2012


On 01/31/2012 01:03 PM, Jim Kinney wrote:
> I don't understand what the advantage is of totally blurring the line
> between user and admin is. You can right now set up your non-root
> account to do root-ish things with no further work other than typing the
> command.

I think we’re confusing technical and non-technical things in this
conversation... :-)

“System administrator” is both a job title and a role in a system.  For
the purposes of a job title, a system administrator should be able to do
(almost) anything and everything in a system, with whatever legal and
ethical constraints exist.  For the purposes of a role within the system
itself, a system administrator should be able to carry out the actions
that the job title confers and the system will (read: should be able to)
enforce any necessary restrictions.

Now, that said, I don’t see PK or any similar system as blurring any
lines:  A role is a role, and a user on a system must necessarily fill a
role or the account has no purpose for existing.

As an example, I am (on many systems) one of the few people who are
allowed to use the “sudo” command without restriction.  I find that
every time I have to (read: am ordered or otherwise required to, for
practical, technical, organizational or other reasons) delegate a task
that would otherwise require giving someone the ability to use the
“sudo” command to perform some sort of elevation, I have to work very
carefully to be sure that I do not accidentally confer any additional
privilege.

As an example, on one system, there is a person (who is a business
administrator and is on-site at all times) who has the authority to
issue a password reset for a user.  The password reset process is pretty
easy; I do it by hand all the time.  However, I don’t allow
non-technical people the ability to even use the “sudo” command.
For this instance, I wrote a small C program that performs the steps
of the password reset itself, set its permissions to 0710 with
ownership “root:bizadmin”, made the executable setuid, and added the
required user to the bizadmin group.

This has the effect I want:  other than root, the only users who are
allowed to run the program are in the bizadmin group --- and they are
not allowed to read, copy or otherwise manipulate the file.  It is nice
and simple, and Just Works.

Is it the _most_ secure thing in the world?  No.  But it is sufficient
for the needs of the system in question, and makes it possible to
successfully delegate control of a specific thing to a specific subset
of people without giving them the ability to modify things willy-nilly.

I could also simply write and compile the program, make it 0700 and give
the bizadmin group the ability to run it by specifying “sudo <progname>”
exactly.  This would then require that they enter their own password
before being able to run the executable, but since this particular
person only ever enters the shell to run this executable, the extra
prompt annoys them.  <sigh>.
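For reference, that sudoers rule would be a one-liner along these
lines (the path and group name here are hypothetical):

```
# /etc/sudoers.d/bizadmin -- permit exactly one command, nothing else
%bizadmin ALL = (root) /usr/local/sbin/reset-passwd
```

Because the command is listed exactly, nothing else is conferred.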

Now imagine that instead, there are pre-created, trusted pieces of code
that I can call into over the system bus in order to do the same thing.
 I can instead simply modify policy:  the user would be able to run the
program, specify his password to authenticate (increased security over
my solution above) or not (based on some criteria that can be defined
in the policy, AIUI), and the trusted helper would then run in (a) a
different address space and (b) with a different set of privileges.

Additionally, it is possible to securely get the credentials of the
user (UNIX sockets are great things) and check some custom ACL
implementation for a complex application, should that be necessary.
With PostgreSQL as the (only!) exception that I am aware of, there is
no way to integrate software with SELinux in such a way as to
compartmentalize application-level objects at a granular level.  Using
this approach, any arbitrarily complex object in a system could be
made accessible under an arbitrary set of rules.

Actually, this is one thing that has long annoyed me about UNIX
**in the context of today’s world**.  Not everything is a “file”.  The
system has no notion of records, and anything that the system doesn’t
provide has to be implemented somehow in userspace, unless the people
who want the functionality wind up modifying the kernel.  In this
instance, userspace control mechanisms that are separated by process or
network boundaries are far more useful than anything that the kernel
provides as an intrinsic primitive; the kernel simply cannot adequately
address complex application-level access control needs in many
scenarios, particularly those that involve databases.

> The hard separation exists for a reason. It's better to learn the tool
> chains available before embarking on a new project to reinvent the
> wheel. SELinux and AppArmour are very similar in concept but different
> in operation and practice. As you use Debian derivatives, learn
> AppArmour. If you use RedHat derivatives, learn SELinux.

There is still a hard separation.  The only real difference is in the
invocation and rules used to perform the processing.  If anything, I
would argue that systems like PolicyKit create a better-defined
boundary between normal users and administrators, while satisfying
certain other requirements.

To carry on with my example from above:  imagine that you are in the bizadmin
group, and let’s say that my program mentioned above didn’t check both
the credentials of the current user (the one initiating the password
reset) and the target user (the one whose password is being reset).  For
the record, my program does check the credentials, but let us assume for
this example that it does not.

Someone in the bizadmin group could reset the password of “root” (or
“mbt” :-)) on the system in order to elevate their own level of access.
This
would obviously be a Bad Thing.  The way that I understand SELinux, you
could grant a user the ability to reset passwords (e.g., modify
/etc/shadow) or not (e.g., not modify /etc/shadow).  But in the case of
PolicyKit, you could have either PK (I think) or the trusted backend
component (for sure) perform the necessary sanity checks in order to be
able to prevent this from happening.

That is, the only way that the trusted component is in danger is in the
event that there is an exploit that would allow an unprivileged user to
do one of the following:

  - Modify the trusted helper on disk and wait for it to die and
    respawn.

  - Send a SIGSTOP, modify the trusted helper’s memory, and then send
    a SIGCONT to the helper process.

Absent an exploit, there is nothing to prevent the system from working
as it should.

One step further would be to require a fully networked solution, use
Kerberos for authentication of identity and then use network boundaries
in much the same way.  Then you can actually do _more_ things because of
the complete separation of concerns, but that’s more overhead than most
networks require.  :)

> FYI: PolicyKit is a native part of RHEL. It's purpose is to handle the
> process that allows a user with proper privileges to do gui-fied
> root-ish things. It is tied in nicely with SELinux. My laptop runs in

Yes, this is (so far) the predominant use case of PK.  However, it is in
no way limited to such use.  It can be used from CLI programs (ncurses
or even standard line-by-line programs) as well.  Again, IBM’s
internal ISSI program is one example of a real-world program that
could make use of something like PK (and it has its own
implementation of something not dissimilar, if memory serves).

> permissive mode. My servers run in targeted mode. That means apache can
> read/write ONLY apache directories (i.e. have the type
> httpd_sys_content_t. I can as admin make any area of the filesystem have
> that type and apache will be able to use that space. If I want to, I can
> dig way deep and allow suexec_httpd to use particular spaces only and
> not be able to write to /tmp or whatever. Targeted policy is pretty
> easy. MLS/MCS can be the total brain-bender :-) Picture the following:

I assume you can also create a type for areas that are read-only to the
Web server and so forth; but the thing is that I can (and do) do the
very same thing with POSIX ACLs; I'll grant access to the group that the
Web server in question is running as so that it is possible for it to do
whatever it needs.  Default ACLs also make it possible for newly created
files and directories to inherit ACL settings, making it possible for
the Web server to enable upload of content and still have ACLs on it to
protect it.
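As a concrete illustration, getfacl output on a directory configured
that way might look roughly like this (path and group name are
hypothetical); the “default:” entries are what newly created files
and subdirectories inherit:

```
# file: srv/www/uploads
# owner: root
# group: root
user::rwx
group::r-x
group:www-data:rwx
mask::rwx
other::---
default:user::rwx
default:group::r-x
default:group:www-data:rwx
default:mask::rwx
default:other::---
```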

In order to get any better protections than that (e.g., only certain
users of the Web application are allowed to access certain content and
proof of identity must be given in order for access to be granted),
something else has to be used and there needs to be (at the very least)
a process boundary, though personally I would prefer a network boundary
in such a case.  Authentication to the HTTP server is almost always
something which is opaque to the operating system kernel.  Which means
that not even SELinux can help unless there is a Web application that is
specifically written to use it and change the credentials of the Web
server worker based on the user it is currently servicing.  That sounds
like 10 miles of bad road to me; the Web server should absolutely not
have that ability unless it can be proved that the server has zero bugs
in it, and that is simply not possible.  Since it isn’t possible, we
must make good use of both process and network boundaries in order to
mitigate whatever damage the bugs that are present are capable of
causing.

> Each user has multiple level of security. Each level can "read down" and
> "write up" a security level. A process called "polyinstantiation" was
> created so that each user has multiple $HOME with different security
> levels. There is a /tmp for each level in use AND it's tied to each
> user. So it's a private /tmp that kernel space understands as normal
> /tmp when a user app calls for an IO to /tmp. Each security level
> transition requires a login. The entire chain of logins is tracked back
> to the originating login. So a user can't use a local exploit to become
> root and then do anything because the system knows the transition path.

This sounds like a feature that makes use of the kernel’s support for
differing filesystem namespaces.  If that's the case, doesn’t that mean
that a process is stuck with its namespace once it is spawned?

> Now add in MCS to further subdivide the system and processes into
> compartments that can each have multiple levels. So Fred works on two
> projects and at different levels on each. Each category can (and usually
> does) require a complete login process (not su) so that
> polyinstantiation wakes up and does it's job at each category and level.

I’m not sure that I actually follow your example.  How would the same
login result in a different instance?  And why can Fred not quickly
switch between both projects on which he works?

That said, honestly, this sounds like something to be enforced across a
network boundary.  This actually sounds like we are shoehorning things
into files where no files need be (directly) used.  For example, I use a
network boundary method to enforce ACLs on git repositories that I work
on, where some repositories are available to multiple users and with
varying granularity of permissions.  For project management I use a Web
application that has its own permissions system.

Oh, and kind of off-topic, this reminds me; I recently found an
accounting system that claims to have access controls but they don’t
work!  I suppose that if you could find a way to make it work with
something like SELinux that would not be allowed to happen, but I don’t
really think that applications are likely to actually take advantage of
something like SELinux to dynamically update the ruleset to fit the
changing requirements of a dynamic system.  I could be wrong, of
course, since I’m nowhere close to intimately familiar with SELinux in
particular.  AFAIK this isn’t something that is done with AppArmor
either; Ubuntu makes it the responsibility of packages to install
profiles for use with AppArmor, for example.

> Once you know how to read the audit logs, you can track a user through
> what is done. By tricks such as dual logs and append-only partitions, a
> cracker has nearly no chance to both "do bad things" AND cover the tracks.

Again, that sounds like a place where a network boundary is “the tool”.
A single system (or cluster) should be in charge of receiving audit
logs from the systems in its environment, and those logs should
ideally also be shipped off-site in real time or near-real time so
that they can be provably append-only.
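As one hedged example, rsyslog can ship everything to a central log
host with a single rule (the host name and port here are
hypothetical):

```
# /etc/rsyslog.d/ship.conf -- forward all messages to the log host
# (a double @@ selects TCP rather than UDP)
*.*  @@loghost.example.com:514
```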

> I'll start working on the SELinux roadshow and holler when it's ready.

+eleventyone! :-)

	--- Mike

-- 
A man who reasons deliberately, manages it better after studying Logic
than he could before, if he is sincere about it and has common sense.
                                   --- Carveth Read, “Logic”
