On Tue, Sep 6, 2022 at 8:31 PM Casey Schaufler <casey(a)schaufler-ca.com> wrote:
> On 9/6/2022 4:24 PM, Paul Moore wrote:
>> On Fri, Sep 2, 2022 at 7:14 PM Casey Schaufler <casey(a)schaufler-ca.com> wrote:
>>> On 9/2/2022 2:30 PM, Paul Moore wrote:
>>>> On Tue, Aug 2, 2022 at 8:56 PM Paul Moore <paul(a)paul-moore.com> wrote:
>>>>> On Tue, Aug 2, 2022 at 8:01 PM Casey Schaufler <casey(a)schaufler-ca.com> wrote:
>>>>>> I would like very much to get v38 or v39 of the LSM stacking for AppArmor
>>>>>> patch set in the LSM next branch for 6.1. The audit changes have polished
>>>>>> up nicely and I believe that all comments on the integrity code have been
>>>>>> addressed. The interface_lsm mechanism has been beaten to a frothy peak.
>>>>>> There are serious binder changes, but I think they address issues beyond
>>>>>> the needs of stacking. Changes outside these areas are pretty well limited
>>>>>> to LSM interface improvements.
>>>>>
>>>>> The LSM stacking patches are near the very top of my list to review
>>>>> once the merge window clears, the io_uring fixes are in (bug fix), and
>>>>> SCTP is somewhat sane again (bug fix). I'm hopeful that the io_uring
>>>>> and SCTP stuff can be finished up in the next week or two.
>>>>>
>>>>> Since I'm the designated first stuckee now for the stacking stuff I
>>>>> want to go back through everything with fresh eyes, which probably
>>>>> isn't a bad idea since it has been a while since I looked at the full
>>>>> patchset from bottom to top. I can tell you that I've never been
>>>>> really excited about the /proc changes, and believe it or not I've
>>>>> been thinking about those a fair amount since James asked me to start
>>>>> maintaining the LSM. I don't want to get into any detail until I've
>>>>> had a chance to look over everything again, but just a heads-up that
>>>>> I'm not too excited about those bits.
>>>>
>>>> As I mentioned above, I don't really like the stuff that one has to do
>>>> to support LSM stacking on the existing /proc interfaces, the
>>>> "label1\0label2\0labelN\0" hack is probably the best (only?) option we
>>>> have for retrofitting multiple LSMs into those interfaces and I think
>>>> we can all agree it's not a great API. Considering that applications
>>>> that wish to become simultaneous multi-LSM aware are going to need
>>>> modification anyway, let's take a step back and see if we can do this
>>>> with a more sensible API.
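
To make the pain concrete: every application that wants to consume that
format ends up hand-walking a NUL-delimited buffer. A minimal sketch,
assuming a buffer already read from one of the /proc attr files (the
function and its names are illustrative, not from the patchset):

#include <stdio.h>
#include <string.h>

/* Walk a "label1\0label2\0labelN\0" buffer of 'len' bytes. */
static void walk_labels(const char *buf, size_t len)
{
	const char *p = buf;

	while (p < buf + len && *p != '\0') {
		printf("label: %s\n", p);
		p += strlen(p) + 1;	/* step past the label and its NUL */
	}
}

It works, but it is a convention every consumer has to reimplement, which
is part of why I don't think it is a great API.
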
>>> This is a compound problem. Some applications, including systemd and dbus,
>>> will require modification to completely support multiple concurrent LSMs
>>> in the long term. This will certainly be the case should someone be wild
>>> and crazy enough to use Smack and SELinux together. Even with the (Smack or
>>> SELinux) and AppArmor case the ps(1) command should be educated about the
>>> possibility of multiple "current" values. However, in a container world,
>>> where an Android container can run on an Ubuntu system, the presence of
>>> AppArmor on the base system is completely uninteresting to the SELinux
>>> aware applications in the container. This is a real use case.
>>
>> If you are running AppArmor on the host system and SELinux in a
>> container you are likely going to have some *very* bizarre behavior as
>> the SELinux policy you load in the container will apply to the entire
>> system, including processes which started *before* the SELinux policy
>> was loaded. While I understand the point you are trying to make, I
>> don't believe the example you chose is going to work without a lot of
>> other changes.
>
> I don't use it myself, but I know it's frighteningly popular.

All right, I'm going to call your bluff here - who are these people
running AppArmor on the host and SELinux in a container? What policy
are they using, it's surely not an unmodified Fedora/RHEL or upstream
refpol policy? Do they run in enforcing mode without massive
permissions granted to kernel_t (I'm guessing all of the host
applications would appear as kernel_t)? How do you handle multiple
SELinux containers?

I'm aware of *one* use case where SELinux is run in a container and
that required a number of careful constraints on the use case and a
good deal of hacking to enable. I'm sure there are definitely people
that *want* this, especially in the context of Ubuntu, but I really
doubt this is in widespread use today.

>>>> I think it's time to think about a proper set of LSM syscalls.
>>>
>>> At the very least we need a liblsm that performs a number of useful
>>> functions, like identifying what security modules are available,
>>> validating "contexts", fetching "contexts" from files and processes
>>> and that sort of thing. Whether it is built on syscalls or /proc and
>>> getxattr() is a matter of debate and taste.
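
For concreteness, the library surface being described would presumably look
something like the prototypes below. These are purely illustrative; no such
library exists and all of the names are made up:

#include <sys/types.h>

int   lsm_module_list(char ***names);           /* enumerate active LSMs */
int   lsm_context_valid(const char *lsm, const char *ctx);
char *lsm_context_of_file(const char *lsm, const char *path);
char *lsm_context_of_pid(const char *lsm, pid_t pid);
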
>> Why is it a foregone conclusion that a library would be necessary for
>> basic operations? If the kernel/userspace API is sane to begin with
>> we could probably either significantly reduce or eliminate the need
>> for additional libraries.
>
> I'm using my experience with the "hideous" context format
> ( "apparmor\0unconfined\0smack\0User\0" ) as a guide. Creating
> a "sane" API for returning multiple lsm/context pairs is going
> to be tricky. No one wants to require iterative calls to get a
> collection of values for example. I've spent the past few years
> trying to pound out APIs that are somewhat sane. I don't want to
> spend another few years repeating the process for kernel APIs.

See my earlier comment to John. I care a lot about getting things
right, and very little about how long it takes. I'm sympathetic about
the time and difficulty involved, but I see that as no reason to merge
a not-good design.
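
As a strawman for what I mean by a more sensible API: a single call that
returns every (lsm, context) pair at once, with explicit sizing, so callers
never have to iterate per module. The names and layout below are purely
hypothetical:

#include <stddef.h>
#include <stdint.h>

struct lsm_ctx {
	uint64_t id;       /* which LSM this entry belongs to */
	uint64_t flags;
	uint64_t ctx_len;  /* bytes in ctx[], including the trailing NUL */
	uint8_t  ctx[];    /* the "context" value as a NUL-terminated string */
};

/*
 * Fill 'ctx' with one entry per active LSM for attribute 'attr'.
 * '*size' is the buffer size on input and the bytes used on output.
 * Returns the number of entries written, or a negative errno.
 */
int lsm_get_self_attr(unsigned int attr, struct lsm_ctx *ctx,
		      size_t *size, uint32_t flags);
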
>>> The addition of multiple subject labels to audit would be the same regardless
>>> of /proc or syscall interfaces.
>>
>> Yes, that's why I didn't bring up audit as it doesn't weigh on this
>> decision. If you really want to include audit for some reason, I'll
>> simply remind you that I pushed back hard on overloading the existing
>> subj/obj fields with a multiplexed label format, asking for individual
>> subj/obj fields for each LSM.
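
For illustration only, with hypothetical field names: rather than packing
every module's label into one multiplexed subj= value, each record would
carry a separate, clearly named field per LSM, e.g.

  subj_apparmor=unconfined subj_smack=User

so log parsers never have to split a compound value.
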
> Just pointing out that the stacking patches aren't that complicated.

Ha! Let's just agree to disagree on that point :)

--
paul-moore.com