Simo's Blog

"The other Crypto"


OpenSSL 3.0 Providers and PKCS#11

After a long hiatus I am back with a new blog post.

What triggered it is that I started a new project because I wanted to explore two things I have been putting off for a while, and I had some time on my hands on a long weekend.

What's interesting about this project is that, on paper, it is straightforward: we just wire one API up to another, and given that both deal with simple cryptography primitives it should be pretty simple... or is it?

Well, of course it isn't. First of all we are talking about cryptography, which is notoriously finicky in terms of API for various reasons, some legitimate and some less so. But the actual calls that need to be made to implement the cryptography are the least problematic, for the most part. What really makes things hard are the lack of documentation about OpenSSL providers, and the impedance mismatch between the way OpenSSL goes about handling some of the operations and the way PKCS#11 envisions applications should handle the cryptography engine.

So the task of pairing the two APIs is compounded by the need to decide which compromises are reasonable when the semantics differ.

A small rant on OpenSSL internals

Of course deciding that is possible only after you dig into OpenSSL's source code to figure out the semantics in the first place. I have to admit that OpenSSL's internals are quite convoluted and baffling at times, and more than once I felt the architecture of the code was unnecessarily complicated and obscure, or downright obtuse. Of course I understand that a hefty dose of legacy code forced some of these contortions, but still...

The code is really hard to follow for a few reasons:

- A lot of code is generated at build time through a nest of macros.
This means that some of the tools I use to automatically navigate code are neutralized because they can't preprocess macros, forcing me to resort to clever grepping of partial names to try and find where these macros live, and then mentally reconstruct, each time, what might be generated in order to figure out which function to look at next.
I could use something like gcc -E to get and index the preprocessed output before compilation, but it is not as easy to do and requires battling the build system.

- A ton of indirection and jump tables are used all over the code.
The OpenSSL code is excessively dynamic: at runtime there are dozens and dozens of places where the code is "pluggable", and several different cryptography primitives can be called from one API and then multiplexed internally based on some identifier passed from the application. This makes the API easier to use from applications, but makes it hard to follow what is being called next at any given time.

The lack of obvious naming standards, the reuse of very simple words like "sign/encrypt" as elements of these tables, and the several layers of indirection introduced by providers and by other layers that deal with legacy APIs make it impossible to read the code linearly, from the point of entry to the execution of the actual primitives. In order to understand with any degree of confidence what is going on under the hood you need to keep a lot of knowledge of how the internals work in your head. This is beyond the ability of my brain, so I often resort to using GDB and some strategically placed breakpoints. Unfortunately just stepping through with gdb is not a viable way to explore the code either, because the abstraction around internal name/provider resolution, and therefore function routing and data caching, is absolutely impenetrable.
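To give an idea of what these dispatch tables look like on the provider side, here is a minimal, illustrative skeleton (not code from my project; names like p11_query are just placeholders) of the single entry point a provider exports. The core hands you its dispatch table and you hand back yours:

#include <openssl/core.h>
#include <openssl/core_dispatch.h>

/* Answer the core's query: "which algorithms do you offer for this
 * operation?". Returning NULL means "none"; a real provider returns
 * OSSL_ALGORITHM arrays keyed by operation id. */
static const OSSL_ALGORITHM *p11_query(void *provctx, int operation_id,
                                       int *no_cache)
{
    *no_cache = 0;
    return NULL;
}

static const OSSL_DISPATCH p11_dispatch[] = {
    { OSSL_FUNC_PROVIDER_QUERY_OPERATION, (void (*)(void))p11_query },
    { 0, NULL }
};

/* The only symbol OpenSSL looks up when loading the module. */
int OSSL_provider_init(const OSSL_CORE_HANDLE *handle,
                       const OSSL_DISPATCH *in,
                       const OSSL_DISPATCH **out,
                       void **provctx)
{
    *provctx = (void *)handle;  /* a real provider allocates its own context */
    *out = p11_dispatch;
    return 1;
}

Everything past OSSL_provider_init() is reached only through tables like this one, which is exactly why the code is so hard to follow statically.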

A small rant on PKCS#11

Ok enough about OpenSSL, let's look into the PKCS#11 API.

At first glance I have to say that the PKCS#11 API is pretty straightforward: you call an initial entry point after dlopen() of the driver you want to use, get back a function table, and then each cryptography primitive the standard supports is called through reasonably well thought out and minimal abstractions.
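The whole bootstrap can be sketched in a few lines. This is a simplified example, not production code; the header name and error handling vary between p11-kit and vendor SDKs:

#include <dlfcn.h>
#include <stdio.h>
#include <pkcs11.h>   /* CK_* types; actual header name/path varies by distribution */

int main(int argc, char **argv)
{
    void *dl;
    CK_RV (*getlist)(CK_FUNCTION_LIST_PTR_PTR);
    CK_FUNCTION_LIST_PTR f = NULL;
    CK_RV rv;

    if (argc < 2)
        return 1;

    /* argv[1] is the path to the token driver, e.g. the vendor's .so */
    dl = dlopen(argv[1], RTLD_NOW | RTLD_LOCAL);
    if (!dl)
        return 1;

    getlist = (CK_RV (*)(CK_FUNCTION_LIST_PTR_PTR))dlsym(dl, "C_GetFunctionList");
    if (!getlist || getlist(&f) != CKR_OK)
        return 1;

    rv = f->C_Initialize(NULL);   /* from here on, everything goes through f-> */
    printf("C_Initialize: 0x%lx\n", rv);
    f->C_Finalize(NULL);
    return 0;
}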

What are the issues surrounding PKCS#11 then? It's more of an ecosystem issue in this case. The PKCS#11 API has gone through various revisions, so you have to deal with tokens that may be stuck on an older version (and therefore support fewer features). If that were the only issue it would be easy to solve: you just write two or three variants, one per PKCS#11 version, and you are done... not so fast!

One of the issues with PKCS#11 is that historically it was not prescriptive enough: tokens can decide arbitrarily which functions they support, so you have to be prepared to deal with missing functionality at runtime. This may force adding fallback code to handle functions that OpenSSL assumes are provided by a single provider facility.
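As a hedged illustration of what such fallback code can look like, here is a hypothetical helper (the function name and the buffering strategy are mine, not from any specific driver) that downgrades from multi-part signing to a one-shot C_Sign() when a token refuses C_SignUpdate():

#include <pkcs11.h>

/* Returns CKR_OK and sets *multipart to 0 when the token lacks multi-part
 * signing, so the caller knows it must buffer the data itself and issue a
 * single C_Sign() call at the end instead. */
static CK_RV sign_update_or_fallback(CK_FUNCTION_LIST_PTR f,
                                     CK_SESSION_HANDLE session,
                                     CK_BYTE_PTR data, CK_ULONG len,
                                     int *multipart)
{
    CK_RV rv = f->C_SignUpdate(session, data, len);

    if (rv == CKR_FUNCTION_NOT_SUPPORTED) {
        *multipart = 0;   /* caller accumulates and calls C_Sign() later */
        return CKR_OK;
    }
    *multipart = 1;
    return rv;
}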

Another issue is that although the spec is quite big and detailed, it is, at the same time, somewhat underspecified when it comes to some of the details. For example, trying to figure out the exact formatting needed for an attribute like CKA_EC_PARAMS (for ECDSA signatures) is not trivial due to the heavy use of ASN.1 in DER format and of OIDs.
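For the record, once you dig it out of the spec: for a named curve, CKA_EC_PARAMS is just the DER encoding of the curve OID and nothing more. For example, for P-256 (OID 1.2.840.10045.3.1.7) the attribute value looks like this (illustrative snippet, the template is of course incomplete):

/* CKA_EC_PARAMS for P-256: the DER encoding of the namedCurve OID
 * 1.2.840.10045.3.1.7 */
static CK_BYTE p256_params[] = {
    0x06, 0x08, 0x2a, 0x86, 0x48, 0xce, 0x3d, 0x03, 0x01, 0x07
};

CK_ATTRIBUTE ec_key_template[] = {
    { CKA_EC_PARAMS, p256_params, sizeof(p256_params) },
};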

Then there are the many, obviously bogus, drivers: those that stick to the bare letter of the spec to avoid coding the difficult stuff. One notable example, which I looked at recently and will not mention by name, is as bare bones as you can possibly be while remaining somewhat spec compliant. A bit depressing.

The problem with low quality drivers is that you need to account for quirks, and add more code to handle stuff that can be made to work, just not quite in the correct way it should be done.

Conclusions

Although I like ranting, I have to say I am enjoying writing this code. Just like the old Samba times, you have to go and discover what actually works, what other engineers came up with and actually did behind the APIs. Discovering the actual semantics, and sometimes using them against the original intent, Kung Fu style, is fun.

The main goal of this project is to make Hardware Tokens really accessible to applications. Unlike the old "engines" API in OpenSSL, where applications had to be explicitly coded to work differently in the presence of an external cryptography module, the providers API is basically hidden within OpenSSL's core and "transparent" to applications.

Code written against the modern OpenSSL 3.0 facilities that reference keys through URIs (a file name is considered a URI) via the store API can use a PKCS#11 module without any code changes, by simply replacing the PEM file path with a pkcs11: URI.
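As a rough sketch of what I mean (simplified, with no PIN/UI handling, and the helper name load_key is just mine), loading a key through the store API looks the same whatever the URI points to:

#include <openssl/store.h>
#include <openssl/evp.h>

/* Load a private key from any URI the configured providers understand:
 * "file:/etc/pki/key.pem" and "pkcs11:token=mytoken;object=mykey" should
 * both work, with no application changes. */
static EVP_PKEY *load_key(const char *uri)
{
    EVP_PKEY *key = NULL;
    OSSL_STORE_CTX *ctx = OSSL_STORE_open(uri, NULL, NULL, NULL, NULL);

    if (ctx == NULL)
        return NULL;

    while (key == NULL && !OSSL_STORE_eof(ctx)) {
        OSSL_STORE_INFO *info = OSSL_STORE_load(ctx);

        if (info != NULL
            && OSSL_STORE_INFO_get_type(info) == OSSL_STORE_INFO_PKEY)
            key = OSSL_STORE_INFO_get1_PKEY(info);
        OSSL_STORE_INFO_free(info);
    }
    OSSL_STORE_close(ctx);
    return key;
}

A pkcs11: URI will typically also need a UI_METHOD to prompt for the PIN, but the application logic does not otherwise change.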

Of course most applications are written against a mix of old and clunky OpenSSL APIs that have not been fully deprecated yet, but given the changes we see on the horizon, with the advent of PQC algorithms, I think we have a chance to see a lot of applications switch over to the new OpenSSL APIs, which will be the only ones to offer access to these new algorithms.

Fun times ahead

Distributing Secrets with Custodia

My last blog post described a crypto library I created named JWCrypto. I built this library as a building block of Custodia, a service that helps share Secrets, Keys and Passwords in distributed applications like microservice architectures built on containers.

Custodia is itself a building block of a new FreeIPA feature to improve the experience of setting up replicas. In fact Custodia at the moment is mostly plumbing for this feature, and although the plumbing is all there, it is not very usable outside of the FreeIPA project without some tinkering.

The past week I was at Flock, where I gave a presentation on the problem of distributing Secrets securely. It is based on my work and my thinking about the general problem, and on how I applied that thinking to build a generic service which I then specialized for use by FreeIPA. If you are curious, I have posted the slides I used during my talk, and I am assured there will soon be video recordings of all the talks available online.

JWCrypto a python module to do crypto using JSON

Lately I had the need to use some crypto in a web-like scenario, a.k.a. over HTTP(S), so I set out to look at what could be used.

Pretty quickly it became clear that the JSON Web Encryption standard proposed in the IETF JOSE Working Group would be a good fit, and that JSON Web Signature would come in useful too.

Once I was convinced this was the standard to use, I tried to find a python module that implemented it, as the project I am going to use this stuff in (ultimately FreeIPA) is python based.

The only implementation I found initially (since then I've found other projects scattered over the web) was this Jose project on GitHub.

After a quick look I was not satisfied by three things:

While the first was not a big problem, as I could simply contribute the missing parts, the second was, and the third was a big minus too. I wanted to use the new Python Cryptography library as it has proper interfaces and support for modern crypto, and neatly abstracts away the underlying crypto-library bindings.

So after looking over the specs in detail to see how much work it would entail, I decided to build a python module to implement all the relevant specs myself.

The JWCrypto project is the result of a few weeks of work, complete with documentation hosted on ReadTheDocs.

It is an almost complete implementation of the JWK, JWE, JWS and JWT specs and implements most of the algorithms defined in the JWA spec. It has been reviewed internally by a member of the Red Hat Security Team and has an extensive test suite based on the specs and the test vectors included in the JOSE WG Cookbook. It is also both Python2.7 and Python3.3 compatible!

I had a lot of fun implementing it, so if you find it useful feel free to drop me a note.

On Load Balancers and Kerberos

I've recently witnessed a lot of discussions around using load balancers with FreeIPA on the users mailing list, and I realized there is a lot of confusion around how to use load balancers when Kerberos is used for authentication.

One of the issues is that Kerberos depends on accurate naming as server names are used to build the Service Principal Name (SPN) used to request tickets from a KDC.

When people introduce a load balancer on a network they usually assign it a new name, which is used to direct all clients to a single box that then redirects traffic to the multiple hosts behind the balancer.

From a transport point of view this is just fine, the box just handles packets. But from the client point of view all servers now look alike (same name). They have, intentionally, no idea what server they are going to hit.

This is the crux of the problem. When a client wants to authenticate using Kerberos it needs to ask the KDC for a ticket for a specific SPN. The only name available in this case is that of the load balancer, so that name is used to request a ticket.

For example, if we have three HTTP servers in a domain: uno.ipa.dom, due.ipa.dom, tre.ipa.dom; and for some reason we want to load balance them using the name all.ipa.dom then all a client can do is to go to the KDC and ask for a ticket for the SPN named: HTTP/all.ipa.dom@IPA.DOM

Now, once the client actually connects to that IP address and gets redirected by the load balancer to one of the servers, say uno.ipa.dom, it will present this server with a ticket that can be used only if the server has the key for the SPN named HTTP/all.ipa.dom@IPA.DOM

There are a few ways to satisfy this condition depending on what a KDC supports and what is the use case.

Use only one common Service Principal Name

One of the solutions is to create a new Service Principal in the KDC for the name HTTP/all.ipa.dom@IPA.DOM, then generate a keytab and distribute it to all servers. The servers will use no other key and will identify themselves with the common name, so if a client tries to contact them using their individual names authentication will fail, as the KDC will not have a principal for those other names and the services themselves are not configured to use their hostnames, only the common name.

Use one key and multiple SPNs

A slightly friendlier way is to assign aliases to a single principal name, so that clients can contact the servers both with the common name and directly using the server's individual names. This is possible if the KDC can create aliases to the canonical principal name. The SPNs HTTP/uno.ipa.dom, HTTP/due.ipa.dom, HTTP/tre.ipa.dom are created as aliases of HTTP/all.ipa.dom, so when a client asks for a ticket for any of these names the same key is used to generate it.

Use multiple keys, one per name

Yet another way is to assign servers multiple keys. For example the server named uno.ipa.dom will be given a keytab with keys for both HTTP/uno.ipa.dom@IPA.DOM and HTTP/all.ipa.dom@IPA.DOM, so that regardless of how the client tries to access it, the KDC will return a ticket using a key the service has access to.

It is important to note that the acceptor, in this case, must not be configured to use a specific SPN or acquire specific credentials before trying to accept a connection if using GSSAPI, otherwise the wrong key may be selected from the keytab and context establishment may fail. If no name is specified then GSSAPI can try all keys in the keytab until one succeeds in decrypting the ticket.
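In GSSAPI terms this usually means passing GSS_C_NO_CREDENTIAL as the acceptor credential, so the mechanism is free to try every key in the keytab. A minimal sketch (the function name is mine, single round-trip only, error handling omitted):

#include <gssapi/gssapi.h>

/* Minimal acceptor sketch: by passing GSS_C_NO_CREDENTIAL we let the
 * mechanism pick whichever keytab key can decrypt the incoming ticket,
 * instead of pinning the acceptor to one SPN. */
static OM_uint32 accept_client(gss_buffer_t input_token,
                               gss_buffer_t output_token,
                               gss_name_t *client_name)
{
    OM_uint32 maj, min;
    gss_ctx_id_t ctx = GSS_C_NO_CONTEXT;

    maj = gss_accept_sec_context(&min, &ctx,
                                 GSS_C_NO_CREDENTIAL,   /* any key in the keytab */
                                 input_token,
                                 GSS_C_NO_CHANNEL_BINDINGS,
                                 client_name, NULL,
                                 output_token, NULL, NULL, NULL);
    return maj;
}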

Proxying authentication

One last option is to actually terminate the connection on a single server which then proxies out to the backend servers. In this case only the proxy has a keytab and the backend servers trust the proxy to set appropriate headers to identify the authenticated client principal, or set a shared session cookie that all servers have access to. In this case clients are forbidden from getting access to the backend server directly by firewalling or similar network level segregation.

Choosing a solution

Choosing which option is right depends on many factors. For example, if (some) clients need to be able to authenticate directly to the backend servers using their individual names, then using only one common name, like in the first and fourth options, is clearly not possible. Using aliases may or may not be possible depending on whether the KDC in use supports them.

More complex cases, the FreeIPA Web UI

The FreeIPA Web UI adds more complexity to the aforementioned cases. The Web UI is just a frontend to the underlying LDAP database and relies on constrained delegation to access the LDAP server, so that access control is applied by the LDAP server using the correct user credentials.

The way constrained delegation is implemented requires the server to obtain a TGT using its keytab. What this means is that only one Service Principal Name can be used by the FreeIPA HTTP server, and that name is determined before the client connects. This makes it particularly difficult to load balance FreeIPA servers. For the HTTP server, the FreeIPA masters could theoretically be manually reconfigured to use a single common name and share a keytab; this would allow clients to connect to any FreeIPA server and perform constrained delegation using the common name. However, admins wouldn't be able to connect to a specific server to change local settings, and internal operations and updates may or may not keep working going forward.

In short, I wouldn't recommend it until the FreeIPA project provides a way to officially access the Web UI using aliases.

A poor man's solution, if you want to offer a single name for ease of access and some sort of load balancing, could be to stand up a server at the common name with a CGI script that redirects clients randomly to one of the IPA servers.

PSA - Smart Cards are still a Hell - Instructions for CardOS cards

Some time ago I received a Smart Card from work in order to do some testing. Of course as soon as I received it I got drowned into some other work and had to postpone playing with it. Come the winter holiday break and I found some time to try this new toy. Except ...

... except I found out that the Smart Card Hell is still a Hell

I tried to find information online about how to initialize the CardOS card I got and I found very little cohesive documentation even on the sites of the tools I ultimately got to use.

The smart card landscape is still a fragmented lake of incompatibility, where the same tools work for some functions on some cards and are sorely lacking in usability in any case.

Ultimately I couldn't find out the right magic incantation for the reader and card combo I had, and instead had to ask a coworker that already used this stuff.

Luckily he had the magic scroll and it allowed me, at least, to start playing with the card. So for posterity, and for my own sake, let me register here the few steps needed to install a certificate in this setup.

I had to use no fewer than 3 different CLI tools to get the job done, which is insane in its own right. The tools, as you will see, have absurd requirements, like sometimes specifying a shared object name on the CLI... I think smart card tools still win the "Unusable jumbled mess of tools - 2013 award".

The cardos-tool --info command let me know that I have a SCM Microsystems Inc. SCR 3310 Reader using a CardOS V4.3B card. Of course you need to know in advance that your card is a CardOS one to be able to find out the tool to use ...

The very lucky thing about this card is that it can be reformatted to pristine status without knowing any PIN or PUK. Of course that means someone can wipe it, but that is not a big deal in production (someone can always lock it dead by failing to enter the PIN and PUK codes enough times), and it is great for developers who keep forgetting whatever test PIN or PUK code was used with a specific card :-) This way the worst case is that you just need to format it and generate/install a new cert to keep testing.

So on to the instructions:

Format the card:

cardos-tool -f
and notice how no confirmation at all is requested, and it works as a user on my Fedora 20 machine. I find not asking for confirmation a bit bold, given this operation destroys all current content, but ... whatever ...

Create the necessary PKCS#15 structure and set the admin PINs:

pkcs15-init -CT --so-pin 12345678 --so-puk 23456789
note that you have to know that you need to create this stuff in the first place, and that a tool with obscure switches to do it even exists ...

Separately create user PIN and unlock code:

pkcs15-init -P -a 1 --pin 87654321 --puk 98765432 --so-pin 12345678 --label "My Cert"
No idea why this needs to be a separate operation, part of the magic scroll.

Finally import an existing certificate:

pkcs15-init --store-private-key /path/to/file.cert --auth-id 01 --pin 87654321 --so-pin 12345678
again, not sure why this is a separate command. Also note that this assumes a PEM formatted file; if you have a pkcs12 file use the --format pkcs12 switch to feed it in. Note that the tool assumes pkcs12 cert files are passphrase protected, so you need to know the passphrase before trying to upload such certs onto the card.

Check everything went well with:

pkcs11-tool --module opensc-pkcs11.so -l --pin 87654321 -O
of course yet another tool, with the most amusing syntax of them all ...

... and that is all I know at this point. If you feel the need to weep at this point feel free, I am reserving a corner of my room to do just that later on after lunch ...

GSS-NTLMSSP a new GSSAPI Mechanism

Without fanfare here is my latest wandering in the creation of obscure and complicated security infrastructure software: GSS-NTLMSSP.

NTLM is Microsoft's first effort at creating a secure authentication method that wouldn't rely on exposing the user password to the target service and instead used a Challenge Response mechanism to create proof of knowledge of a shared secret between the client and the server.

Over the years Microsoft has slightly improved the protocol, and later on, when they finally created the SSPI subsystem in Windows, they created the NTLMSSP mechanism that encapsulated all NTLM usages.

Microsoft's SSPI is the Windows equivalent (and wire-compatible) version of GSSAPI, and I've been wanting to build this mechanism since MIT Kerberos added direct support for the SPNEGO negotiation mechanism.

The current code is still young and many things are missing, notably the ability to use Domain Controller based authentication for the server side. However I find it is a quite useful module for clients, so here we have our first shiny release: 0.1.0.

Feel free to try and use it and let me know if you have neat ideas to improve its use and usability.

About Kerberos Principals and Keys

Time and again I find that people consider the concept of principals confusing unless they are very familiar with Kerberos.

I see the same issues when discussing keys and keytabs.

So what is a Kerberos Principal ?

The simplest, initial, answer can be that a principal is the analogue of a user name in a multiuser OS. So why do we call it a principal ? And why do you hear variations like 'User Principal' or 'Service Principal' ?

The reason the term principal is used is that 'user' is indeed insufficient: too generic and misleading. In Kerberos there are many actors that need keys, and any actor that needs a key must be represented by an identifier. These identifiers are compound strings called 'principals'.

Anatomy of a principal

A principal is a set of components represented by strings. One very important component is the realm name: each principal is always fully qualified with the name of its realm. The realm is represented by the last component in the string form; it is placed after an @ sign and is conventionally all upper case. The first part of the principal, instead, represents a specific identity within the realm, and can be split into multiple components joined by a / character.

Example:

component1 / component2 @ REALM

The simplest principals are actually what we think of users, generally actual people. The simplest identifier to represent users uses just one component and the realm. For example, the principal simo@EXAMPLE.COM represents a user named 'simo' that belongs to a realm named EXAMPLE.COM

The component is what we think of as a user name, pretty simple so far. The realm, as you can see, resembles a domain name. That is on purpose, as Kerberos realms are normally tied to DNS domain names, although this is not strictly required by the protocol specifications. Some implementations of Kerberos, like Active Directory, make it a requirement: in AD the realm name is always the (DNS) domain name.

Another set of extremely important principals are the so called Service Principals. These principals represent actual programs or computers. Their form normally comprises two components, a service part and a fully qualified hostname.

Example:

nfs/server.example.com@EXAMPLE.COM

Let's analyze this principal name. The first component represents the service being used; in this case 'nfs' is used to represent an NFS server. Other well known service types are 'HTTP', 'DNS', 'host', 'cifs', etc... The second component is a DNS name, the server's own name. The realm specifies that this service is bound to the EXAMPLE.COM realm.

Why was this specific convention chosen to represent a specific NFS server ?

The reason is that the Kerberos protocol does not offer a name resolution service. So a convention was devised to make it easy for a client to automatically compute the principal name of the target service it wants to contact, based on two easily known names: the service type and the name of the host offering it. This is necessary because this name is used by the client to contact the KDC and ask for a ticket for that specific service. If the client doesn't know the specific name of the target service, it cannot ask for a ticket.

The service type is easy to know: an NFS client is used to connect to an NFS server, so the type can simply be hard coded to 'nfs', or set in a configuration file quite easily; it will be the same for all services of that type across the network.

The host name is also generally a well known name. When a user wants to connect to a specific server it has to identify it somehow to the NFS client, and that usually means giving the mount utility a server 'name'. Same for HTTP, you have to give the browser a server name to contact as part of a URL and so on. The only limitation, in case of Kerberos is that you need the canonical form upfront (although there is work to relax this requirement). DNS is often used to find out the canonical form from a shorter name or sometimes even an IP address (but see this post about reverse resolution).

The question at this point is: why do we need principals to represent services or whole hosts ? The answer is that each service you want to contact needs keys in order to decrypt the tickets you present to it to authenticate yourself.

Keys and Keytabs

Each principal is associated with a specific key in the KDC, and this key is used to encrypt the tickets given to clients. A service needs the same key in order to decrypt tickets; this is why Kerberos is called a shared key system. Any actor in a Kerberos system has a key that is also known to the KDC and is used to authenticate messages sent to or received from the KDC (a ticket can be considered a message received indirectly from the KDC, in which the KDC asserts the identity of the client).

For user principals the key is the user's password. The KDC stores a copy of the password (generally transformed from the clear text into a more cryptographically useful secret through a key derivation process, but nonetheless perfectly equivalent to the user password).

For service principals, generally, instead of using a password a random key is generated and stored both in the KDC and in a file called a 'Keytab'.

A keytab file contains keys for a specific service, it is completely equivalent to a password file and needs to be treated as a highly sensitive secret.

Possession of the keytab means ability to fully impersonate the principal whose keys are stored in the keytab file. This means a keytab file should never be transmitted over a network in the clear (no emailing of keytabs please) and should be protected by appropriate access control (file permissions) at all times; a common mistake is to create a file in /tmp that is readable by anyone and only then move it somewhere else more secure.

Users can also use keytabs; a password can always be transformed into a keytab (using the same key derivation process that the KDC uses to store its copy), but that is less common because any password change requires creating a new keytab with the new keys.

Using and mapping principals

One of the things that people seem not to realize when they are first shown principals is that any principal can be used as a client to contact any service (in AD this is not always true, as Service Principals are sometimes not allowed to request a TGT, but that is a configuration decision).

This means that when accepting connections authenticated via Kerberos, applications need to pay a little bit of attention to who the client is, and need to perform some basic access control on the client principal before allowing access.

A common mistake is to take the principal name in string form and simply cut anything after the @ sign (the realm name) and use the remaining part as a 'user name' on the system, then perform calls like getpwnam() with this user name and grant the client the same access this user has on the system. Another even worse mistake is to allow ANY client that could properly authenticate to access data as if it were 'trusted' somehow.

On the one hand this may not be sufficient, and on the other this may be dangerously broad and an actual security issue.

First of all, as we said above, any principal may try to contact a service, not just users that have a 1-1 corresponding name on the system. An NFS server may act as a client and use its key to contact another service. For example, a web application needs to decide whether it wants to grant access to a client named nfs/nfsserver.example.com@EXAMPLE.COM just like it gives access to joe@EXAMPLE.COM, or not.

There are also more exotic principals that may contact a service though, not just principals that are somehow directly trusted by our own KDC. For example anonymous principals and principals coming from a trusted realm.

Anonymous principals are quite an obscure and little used feature; often it is not possible to get tickets anonymously in a Kerberos realm, but some implementations allow it, and if the KDC is configured to allow anonymous principals then applications need to be careful not to give these clients the same access they give to fully identified clients.

The anonymous principal is:

WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS

As can be seen, the realm is actually a specially named realm. This is to avoid legacy apps that match the REALM before allowing access being fooled into thinking this is a fully trusted user, and it is one reason why blindly chopping off everything after the @ character is a grave mistake.

Another reason why the realm part must be validated are Kerberos cross-realm trusts. A Kerberos realm can be configured to 'trust' another Kerberos realm. Meaning the principals of Realm A are allowed to get tickets for services in Realm B if there is a trust relationship between B and A.

For our application this means that it may be contacted both by the user joe@REALM.A and by a different user joe@REALM.B who has nothing to do with the previous one. If our application simply cuts off the realm part, without checking that the realm matches something it understands, it may give access to one user's data to a homonym in another realm.

In general applications that do not want to deal with multiple realms should define one realm as allowed and refuse access to any principal that comes from a different realm. If multiple realms need to be supported (and that is a good idea) then appropriate mapping from the principal to an application identifier should be performed, by either using the full principal name as identifier, or by asking the system to map the principal for us including telling whether the principal is acceptable. This can be done in GSSAPI by using the gss_localname() function, which respects the auth_to_local configuration documented in the krb5.conf(5) manpage.
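A rough sketch of what that looks like with gss_localname() follows. The helper name is mine, and note that gss_localname() is an extension, declared in gssapi_ext.h in MIT Kerberos; whether GSS_C_NO_OID is acceptable for the mechanism argument may depend on the implementation:

#include <string.h>
#include <gssapi/gssapi.h>
#include <gssapi/gssapi_ext.h>   /* gss_localname() is an MIT krb5 extension */

/* Map an authenticated client principal to a local account name using the
 * system auth_to_local rules; fail closed if no mapping exists. */
static int principal_to_localuser(gss_name_t client, char *buf, size_t buflen)
{
    OM_uint32 maj, min;
    gss_buffer_desc localname = GSS_C_EMPTY_BUFFER;

    maj = gss_localname(&min, client, GSS_C_NO_OID, &localname);
    if (GSS_ERROR(maj))
        return -1;                       /* unmapped principal: deny access */

    if (localname.length >= buflen) {
        gss_release_buffer(&min, &localname);
        return -1;
    }
    memcpy(buf, localname.value, localname.length);
    buf[localname.length] = '\0';
    gss_release_buffer(&min, &localname);
    return 0;
}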

Using the system provided configuration allows admins to configure rules only once for the whole machine/network and avoid the need to implement mapping in every different application, so I highly recommend it where possible.

Additional warning: principal names are considered case sensitive by the reference implementation (MIT Kerberos), but some implementations treat them in a case-insensitive way (Active Directory, for example). It is safer to always treat principal names as case sensitive. (Active Directory will generally provide the canonicalized form in tickets, although it may accept mismatched case when tickets are requested.)

Hopefully this brief explanation of how to deal with principals and keytabs will be useful to the casual programmer who cares more about the practical implications than about the abstract semantics and technicalities of the Kerberos protocol.

Why depending on DNS Reverse resolution is bad

I have recently been involved in a discussion about why I go around trying to stop applications from using, and sometimes even depending on, DNS reverse resolution (PTR record lookups).

There are 2 main reasons:

- PTR records are very often broken, or simply out of the control of whoever is deploying the service.
- Depending on reverse resolution to determine a server's name is a security problem.

Let's start from the first point, which is easy to argue about. In a lot of cases the person setting up a service is not the same person controlling the DNS. Even more, the person or organization controlling the Forward Zone may not at all be the same one that controls the Reverse Zone.

This is true for general internet usage (try asking your ISP to set a special PTR record for your residential public IP address... laughs), but also for some corporate environments, where Network Ops may be so separate from the user installing a machine, and the rules for requesting DNS changes so complex, that it is sometimes simply too inconvenient to ask for changes, especially in temporary settings like Proof of Concept trials, etc.. This is not hypothetical: in my past life as a consultant I've seen it all, and I can tell you PTR records are broken more often than not.

For this reason alone, depending on a PTR record to obtain the actual name of a server sets a pretty high bar and will inevitably be a barrier to adoption. It gets to silly levels when an application actually gets the 'right' name as input and then translates it into an IP only to attempt reverse resolution and fail. Users legitimately get pissed that the app is so stupid as to throw away the name they just gave it. I just gave the name to you! Don't you see it!

It is surprising how many applications play this silly game when it comes to providing the target name to GSSAPI, which brings us to the second point.

Why is it bad from a security point of view ? We understand that it is unfortunate for cases where reverse resolution is broken, but if reverse resolution is properly configured, what is so bad about depending on it ?

Here is a little scenario I wrote up on the linux-nfs mailing list to explain how the fact that rpc.gssd (the daemon that handles GSSAPI authentication in user space on behalf of the kernel nfs client module) depends on reverse resolution can actually be exploited by an attacker.

Assume the following scenario:

- Alice's computer runs an automatic backup job that mounts, using Kerberos security, an NFS share from secure.server.name and copies her secret documents there.
- On the same network there is a second NFS server, public.server.name, exporting a share that Eve has read access to.
- Eve is able to spoof the DNS replies seen by Alice's computer.

Note that Eve does not need to be controlling any of the servers, and it is sufficient for her to be able to spoof DNS replies.

Now the attack: Eve wants to fool Alice's computer into mounting the public server's NFS share instead of the secure server's one, so that the automatic backup job will copy Alice's secret documents to the public server, where Eve has read access and can grab a copy.

Normally this is not possible, because the Kerberos protocol implies mutual authentication. Not only does the user authenticate to a server by using a ticket, but the ticket is usable only by the right target server, therefore authentication fails if either the user or the server is not who they claim to be.

In our case normally Alice will grab a ticket for nfs@secure.server.name (GSSAPI Naming notation), which can be used exclusively to authenticate against the secure server. If Eve tries to redirect communication to the public server the authentication will fail because the public server is not able to decrypt the ticket.

However, rpc.gssd does a very bad thing(TM). When the client runs the mount command it uses the name provided on the command line to obtain the server's IP address, then ignores the fact that we already have a name and performs a reverse lookup to 'find' the server name.

What this means is that Eve can simply spoof DNS to redirect Alice's computer to the wrong server, and later rpc.gssd will 'find' that the 'real' name of the server is public.server.name (either because Eve spoofed the original forward resolution reply or by spoofing the reverse resolution reply later on).

Now Alice's computer will call into GSSAPI with the constructed name of nfs@public.server.name and when it connects to that server mutual authentication is successful because the ticket can be decrypted by the target server.

Eve just waits for Alice's computer to complete its backup on the wrong server on which she has read access, and grabs the documents.

This type of attack obviously is not limited to the NFS protocol but can be performed against any client that trusts DNS Reverse resolution to determine the target server's name. It is also not limited to GSSAPI, an SSL client might also be fooled the same way if it doesn't check the name that was provided in the URL but instead uses DNS Reverse resolution to validate the server certificate. Luckily I am not aware of any client doing that for HTTPS at least.
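For completeness, the safe pattern on the client side is to build the GSSAPI target name directly from whatever the user typed, never from a PTR lookup. A minimal sketch (the helper name and the hard coded 'nfs' service are just for illustration):

#include <stdio.h>
#include <string.h>
#include <gssapi/gssapi.h>

/* Build the target name "nfs@<host>" straight from the hostname the user
 * typed on the mount command line; no reverse lookup involved. */
static gss_name_t target_from_user_input(const char *user_supplied_host)
{
    OM_uint32 maj, min;
    gss_name_t target = GSS_C_NO_NAME;
    gss_buffer_desc nb;
    char namestr[512];

    snprintf(namestr, sizeof(namestr), "nfs@%s", user_supplied_host);
    nb.value = namestr;
    nb.length = strlen(namestr);

    maj = gss_import_name(&min, &nb, GSS_C_NT_HOSTBASED_SERVICE, &target);
    return GSS_ERROR(maj) ? GSS_C_NO_NAME : target;
}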

And that is all folks!

IMAPD via SSH and Thunderbird

I have been using Evolution for many years, and one of the key features that kept me using it was the ability to run imapd on another machine via ssh. This was done using a simple command in Evolution's options:

ssh -l <user> <server> exec /usr/sbin/imapd

This ssh command allows Evolution to connect directly to a pre-authenticated imapd process on my server, avoiding the need to run a network facing service and the need for password based authentication. Everything is accessed via my ssh connection, which uses key based authentication.

(the option is not directly available anymore and you have to fiddle with gsettings to use it now, which is a real shame as it is completely undiscoverable.)

I recently decided to try out Thunderbird again and found out that this is one of the features that is still missing, after all these years ...

This was a blocker for me, so I decided to find a workaround that would allow me to use Thunderbird and still use ssh to reach the imapd daemon on my server, like I have done for the last decade.

After some tinkering and reading through all the SSH options for the Nth time, I came to the conclusion that ssh alone cannot run a remote command and wire its STDIN/STDOUT to a local port, even though it can do pretty much any other forwarding you may think of, including forwarding your local STDIN/STDOUT to a remote host/port... a real shame.

The most I could achieve was to make SMTP available this way, as I do have an MTA listening to an actual TCP port on the server. Making the MTA available is easy, you just need to run the following command on your client:

ssh -f -N -C -L 10025:localhost:25 -o ExitOnForwardFailure=yes -l <user> <server>

This command makes the server's port 25 available locally on port 10025 through a simple forward over an SSH encrypted channel. The -f and -N options are used to put ssh in the background without running any command or shell. The -C option turns on compression, and the ExitOnForwardFailure option makes ssh fail to start if it cannot establish the forwarding. This way, if I run the command multiple times, only one tunnel stays up, as the other invocations will simply exit silently.

This is quite cool already but doesn't solve my imap problem. To solve that I need to employ one of those little known yet very powerful tools available on Linux (and other *nix OSs as well): netcat

The version I have installed is the one distributed with the Nmap project.

Netcat (ncat or nc) is an incredibly useful tool. I've used it countless times for all sorts of things over the network. And it is the perfect tool to solve my problem when used this way:

ncat -k --sh-exec "ssh -C -l <user> <server> exec /usr/sbin/imapd" -l localhost 10143

This command does a wonderful thing. It keeps (-k) listening (-l) on the local port 10143, and every time there is a connection it runs the command provided by the --sh-exec option in a shell and wires its STDIN/STDOUT to the TCP connection that has just been opened.

This is exactly what I needed. Now every time Thunderbird connects to my local port 10143, netcat will run the ssh command that will connect to the remote server as my user and run the imapd server.

Although Thunderbird's configuration doesn't seem to allow for 'non' authenticated connections, everything seems to work fine if I just leave the password empty. (Remember the imapd server is pre-authenticated via my ssh connection as my remote user and requires no additional authentication.)

So what is missing here ? The Security paranoids among my readers should have spotted one glaring issue! Everybody on my local machine can now connect to my local port 10143 and access my remote mailbox without authentication!!

Let me fix that with a single firewall instruction:

iptables -A OUTPUT -p tcp --dport 10143 -d 127.0.0.1 -m owner ! --uid-owner simo -j REJECT

Yep, it is as simple as that (on Linux at least). But what does it do ?

This command uses a very nifty feature of iptables that allows the kernel to recognize the owner of any outbound connection, and it rejects any connection to port 10143 from any user on the system that is not me. Of course iptables filters any non-local connection to my machine as well.

Problem solved!

Now I can start playing with Thunderbird and see what else I need to tweak to make it useful for me (one thing I already found is an add-on to import/export entire folders, a feature I always wanted and missed in Evolution)

Talking to people

The new year started with a lot of talks at various conferences.

For the past few years I had slowed down on attending conferences, but this year started with my attendance at 2 conferences I like a lot.

The first is FOSDEM, probably the best Free Software conference and certainly the biggest one in the world.

I just love FOSDEM, and I love Belgium for the beers and chocolate, so it is always a pleasure for me to go. Plus I have friends in Brussels, where I have been multiple times in the past, so going back for a full immersion weekend is great.

This year I presented 2 talks at FOSDEM.

One in the main track about Identity Management on Linux and a second in the Legal Devroom about a Veteran's perspective on various legal matters surrounding Free and Open Source Software. I organized this talk as an open discussion between me and the public and I absolutely loved the conversation.

The IdM talk, in contrast, was a classic solo speech giving a 30-kilometer-high overview of the problem of building an IdM system on Linux and for Linux. It does reference the FreeIPA project but does not go into deep technical details beyond explaining why we chose certain technologies.

This actually led to criticism after the talk: Not technical enough!

And it is a fair one, too bad that when I presented the initial abstract to FOSDEM I got the opposite reply: Too technical!, so I had to water down and broaden the initial proposal :-).

I guess you can never win this game, so my resolution is to oscillate between the two extremes ...

... which brings me to the other talk at DevConf.cz. This is a very nice conference, organized by Red Hat in Brno.

DevConf.cz is a developer conference, so I presented a pretty technical talk on GSSAPI and privilege separation using Gss-Proxy, which is the latest project I launched together with Nico and, later, with the help of Günther.

This time I got the: Too technical! red flag. Hopefully, though, it was still interesting enough for the audience.

All in all, I enjoyed these conferences very much. I won't list all the excellent talks I attended, there were too many. Most importantly, I was finally able to meet face to face with some people I interact with every day, or with whom I needed a more interactive discussion to hash out some problems and ideas. A fun and very productive time; what more can you ask for as a nerd type ?

What a great year!

This past year has been really great, too bad I found little time to update my blog :-)

A few things happened that made me cheer up while thinking about what has been going on this year.

Samba 4.0 finally happened. It has been an incredible, long ride, with highs and lows but amazingly we pulled it off!

FreeIPA 3.0 and 3.1 with AD cross-forest trust integration also were released this year. I am so proud of this project, it has achieved results I hardly hoped for when I started it a few years ago.

SSSD has seen multiple releases with the 1.8 Long Term Maintenance series and the 1.9 series. SSSD is one of the most successful projects I have started these past years, and I use it every day myself with great pleasure.

Gss-Proxy is the last project I started, just this year, and has seen 2 initial no-fanfare releases. It is one of those plumbing things that are hardly seen (except when things break :-) but it was exciting to work so deep into GSSAPI code.

Kerberos: delegation and s4u2proxy

One of the most obscure parts of the Kerberos protocol is delegation. And yet it is a very powerful and useful tool to let "agents" work on behalf of users w/o fully trusting them to do everything a user or an admin can.

So what is delegation ? Simply put, it is the ability to give a service a token that can be used on the user's behalf, so that the service can act as if it were the user.

In FreeIPA, for example, the web framework used to mediate administration of the system is such an agent. The framework on its own has absolutely no privileges over the rest of the system. It interacts almost exclusively with the LDAP server and authenticates to the LDAP server using credentials delegated by the user that sends in the requests.

This is possible because through Kerberos and GSSAPI it is possible to delegate user's credentials during the Negotiate exchange that happens at the HTTP layer when a user contacts the Web Server and authenticates to it.

How does it work ?

Before we answer this question we have to take a step back and explain what kinds of delegation are possible. Historically only one very inflexible kind of delegation was actually implemented in standard Kerberos implementations like MIT's or Heimdal's: the full delegation (transmission) of the user's krbtgt to the target service.

This kind of delegation is perfect for services like SSH, where the user wants to have full access to their own credentials after they jumped on the target host, and they generally remain in full control of them.

The drawback of this method is that by transmitting the full krbtgt we are giving another host potential access to each and every service our user has access to. And while that is "powerful", it is also overly broad in many situations. The other, minor, issue is that KDCs normally do not have fine grained authorization attached to this feature, meaning that a user (or, more often, a program acting on the user's machine) can delegate these credentials to any service in the network, without much control from admins.

Enter S4U constrained delegation

Luckily for us Microsoft introduced a new type of "constrained" delegation normally referred to as S4U. This is an extension to the age old Kerberos delegation method and adds 2 flavors of delegation each depending on the KDC for authorization; they are called Service-for-User-to-Self (S4U2Self) and Service-for-User-to-Proxy (S4U2Proxy).

Service-for-User-to-Self

S4U2Self allows a service to get a ticket for itself on behalf of a user; in other terms, it allows the service to get a ticket as if the user had requested it from the KDC using their krbtgt and then contacted the service.

This option may seem of little use: why would a service care about a ticket to itself ? If it is asking for it, it already knows the identity of the user and can operate on its behalf, right ? Wrong.

There are at least 3 aspects that make this function useful. First of all, you get the KDC to issue a ticket and therefore validate that the user's identity actually exists and is active. Second, the KDC may attach an MS-PAC (or other authorization data) to the ticket, allowing the service to know, from an authoritative source, authorization information about the user. Finally, it may allow the service to perform further actions on behalf of the user by using S4U2Proxy constrained delegation on top.

All this is possible only if the KDC allows the specific service to use S4U2Self. This is an additional layer of authorization that is very useful to admins, as it allows them to limit which services can use this feature.

Service-for-User-to-Proxy

S4U2Proxy is the actual method used to perform impersonation against a 3rd service. To use S4U2Proxy a service A that wants to authenticate to service B on behalf of user X, contacts the KDC using a ticket for A from user X (this could also be a ticket obtained through S4U2Self) and sends this ticket to the KDC as evidence that user X did in fact contact service A. The KDC can now make authorization decisions about whether to allow service A to get a ticket for service B in the name of user X. Normally admins will allow this operation only for services that are authorized "Proxies" to other services.
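In GSSAPI terms (at least with the MIT implementation) the S4U2Self step maps to the gss_acquire_cred_impersonate_name() extension, and the credential it returns can then be fed to gss_init_sec_context() toward service B, where the KDC enforces the S4U2Proxy authorization. A rough sketch, with my own function name and no error reporting:

#include <gssapi/gssapi.h>
#include <gssapi/gssapi_ext.h>   /* gss_acquire_cred_impersonate_name (MIT extension) */

/* Service A impersonates "user" (S4U2Self); the returned credential is then
 * usable with gss_init_sec_context() toward service B, where the KDC applies
 * the S4U2Proxy authorization checks. */
static gss_cred_id_t impersonate(gss_cred_id_t service_a_cred, gss_name_t user)
{
    OM_uint32 maj, min;
    gss_cred_id_t user_cred = GSS_C_NO_CREDENTIAL;

    maj = gss_acquire_cred_impersonate_name(&min,
                                            service_a_cred,  /* our own keytab creds */
                                            user,            /* who we act for */
                                            GSS_C_INDEFINITE,
                                            GSS_C_NO_OID_SET,
                                            GSS_C_INITIATE,
                                            &user_cred, NULL, NULL);
    return GSS_ERROR(maj) ? GSS_C_NO_CREDENTIAL : user_cred;
}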

In FreeIPA we just switched to using S4U2Proxy in order to reduce the attack surface of the web framework. With S4U2Proxy we do not need the user to delegate a full krbtgt to us; the web framework can effectively operate only against the LDAP server and no other service in the domain.

These 2 delegation methods are now available in both MIT's and Heimdal's Kerberos implementations. In MIT's case (which is the implementation we use in FreeIPA) it is really possible to use these features only if you use the LDAP back-end (or, in general, a custom back-end that implements the necessary KDB functions). The native back-end does not support these features, because it lacks meaningful grouping methods and the access control facilities to control them.

In coding up the support for FreeIPA we ended up fixing a few bugs in MIT's implementation that will hopefully be available for general use in 1.11 (we have backported patches to RHEL and Fedora). We also had to modify the Apache mod_auth_kerb module to properly deal with S4U2Proxy, which requires the requesting service to have a valid krbtgt in order to send the request to the KDC; this is something mod_auth_kerb did not need before (you do not need a krbtgt if you are just validating a ticket).

Conclusion

S4U constrained delegation is extremely useful: it reduces the attack surface by allowing admins to effectively constrain services, and it gives admins a lot more control over what users can delegate to. Finally, it also makes clients simpler, and this is a key winning feature. In the classic delegation scheme clients need to decide on their own whether to delegate a krbtgt, which ultimately means either asking the user or always/never doing it. And given that it is quite dangerous to liberally forward your ticket to random services, the default is generally not to delegate the krbtgt, making it very difficult to rely on this feature to build powerless agents. With S4U the user only needs a forwardable TGT, but does not need to actually forward it at all. This is a reasonable compromise: it does not require applications to make choices on the user's behalf, nor users to make any decision. The decision rests with admins, who allow a certain service or not, and it is generally taken once, when the service is put in production, greatly reducing the burden on administrators and the risks involved in the traditional delegation scheme.

Code Reviews, Quality and Coverity Results

As SSSD 1.5.0 is hitting the street, I want to give some background on how we deal with code, reviews and quality in SSSD.

NOTE: if you just want to see the Coverity results, feel free to jump to the end of this long post :-)

When I helped jump-start this project, one of the things I wanted to try out was a very strict Review Policy, for a few reasons.

One of the reasons was consistency. Previous projects I participated in had very lax policies about pretty much everything. Style, review and quality were not strictly enforced, and this was felt to be a way to keep the barrier to entry low. In my experience though, the inconsistent style, unclear direction, and poor or absent review ended up creating other barriers for new developers.

Lack of an enforced consistent style makes code difficult to read, especially when you have to read a pair of interacting functions that are written in wildly different styles.

Lack of required reviews helps create an environment in which contributions from outside are not promptly commented upon. A developer with commit access is used to throwing in pretty much everything without having to wait for someone to review, which makes core developers forget how painful it is to wait for a review that never happens. This in turn can discourage new developers who do not have direct commit access from proposing patches, as they get too little feedback and do not feel properly engaged.

Finally, quality is something I think suffers a lot from lack of review. Developers that do not have to stand review tend to become more relaxed; code is thrown in without much thought, as long as it doesn't break the build. But breaking the build is a pretty low standard. So often the way a function performs operations, its semantics, is implicitly assumed by other code. Reviews, in my experience, tend to expose the same piece of code to different points of view and areas of expertise within the project. Things that seem innocuous are pointed out, and at the end of the process both developers gain more knowledge of each other's point of view, and more knowledge in general about the piece of software they are modifying. Usually the net result is that in the mid/long term code quality improves significantly.

When you use a common SCM tool, like git, code reviews can happen in two ways. Review before commit or Review-Commit (R-C), and review after commit or Commit-Review (C-R). In SSSD we use the former. Patches must be reviewed and acked by a second developer before they can be committed.

R-C is generally thought of as the stricter method, but I find it much better than C-R.*

In my experience with the C-R method, the reviewer is encouraged to do sloppier, more cursory reviews and just give acks unless something really stands out as very ugly. Patches regularly slip past review during phases where a lot of churn happens. Long patches tend to get the least review (exactly when reviews are most important). People are less engaged. Also, because the code is already committed, bad patches can cause a lot of bad feelings: the patch is seen as breaking the code, reverts are called for, and the author may feel embarrassed or angered by how they are being treated.

R-C instead ensures review is done; more importantly, it requires active intervention from the reviewer. This in turn makes it less problematic to comment on all aspects of the code, even minor ones. Of course it also risks abuse from obsessive nitpickers, but in general it lets people speak frankly about the code and request that the appropriate corrections be made, or the code will not be committed. The patch is never seen as breaking anything, as it is not committed yet, so you rarely see the added anxiety, pressure and bad feelings that arise when a fix is needed asap. The patch creator has every interest in fixing the issues, learning why they were issues in the first place, and resubmitting a better patch, without pressure or embarrassment.

I found this aspect to be fundamental in helping new developers get up to good code standards quickly. Not only do people not get frustrated by poor commits that need to be "fixed" asap, but the interaction between more senior developers and younger ones benefits both greatly. On the one hand, the younger developer gets access to the insights of the more experienced developer: they get to understand why the patch is not OK and how it needs to be improved to be acceptable. On the other hand, the more experienced developer gets a grasp of which parts of the code are really difficult for younger ones to deal with. Sometimes you are so used to doing things one way that you don't realize they really are pain points and need refactoring to make them usable.

Also, because all developers are subject to the same regime there are no 'elites' that escape review. This prevents bad feelings when a patch takes some more time to get approved, and it generally also prevents the 'elite' from looking down on new developers, or other similar 'status' issues. Of course there are always developers that are more authoritative, but that authority is earned in the field and maintained through reviews.

Arguably all these points are strongly biased by my personal view of things, and I definitely do not deny that, but is there a metric that can tell whether I was right or wrong in some respect ?

Coverity Results seem to give some interesting insight.

We ran Coverity a couple of times during the 1.2.0 development cycle, using spare cycles of an internal Red Hat instance. 1.2.0 was an important release for us because it was going to end up in RHEL 6.0, so we wanted to find and fix as many critical bugs as possible.

The first time ever that we ran Coverity on the SSSD code base, it gave us back a defect density of 1.141 bugs per thousand lines of code. After removing the false positives we were down to 0.556 bugs per thousand lines of code.

This was an astounding result. As you can see in the 'Coverity Scan: 2010 Open Source Integrity Report', the mean defect density for the software industry is around 1 defect per thousand lines, and the mean for a first scan is usually much higher. Also, looking at the 2006 report, the mean for the top 32 open source projects was around 0.4 defects per thousand lines. So we were pretty close to that metric too.

Of course we fixed most of the bugs that were found, and a second scan of the 1.2.1 release revealed a defect density of 0.029 bugs per thousand lines. I call that impressive (and if you know me you know I am not someone who easily shows enthusiasm).

That was all well and good, but we didn't have further access to Coverity until recently. During the release of 1.5.0 we got access to Coverity scans again, so we ran the tool to find out how we fared.

Before spitting out numbers I have to say that the comparison against 1.2.0 is a bit skewed, because we forked off a set of basic libraries that now live in their own tree.

1.2.1 had ~74k lines of C code alone, and the libraries we forked off constituted ~12k lines of that code. In 1.5.0 we have ~65k lines instead. So we roughly lost 12k lines and gained 3k lines in total. The amount of code change is quite a different thing though. Using git, I can see that the removal of the libraries amounted to roughly 34k deletions (this also counts makefiles, comments, blank lines, etc., which is why it is different from the ~12k LOC figure I gave above), while the diffstat of the diff between 1.2.1 and 1.5.0 gives ~73k deletions and 56k additions. So quite a bit of change happened on that code base after all.

In mid December we scanned the code base, roughly 6 months after the release of 1.2.1, and the results were again astounding: 0.189 bugs per thousand lines. In total 24 defects, 20 real and 4 false positives. And a week later we were down to 0 (zero) outstanding defects.

These numbers tell me that our code quality is quite good, and although I can't claim a causal effect, I believe our review strategy accounts for much of it.

Finally, congratulations to all SSSD developers. You've done a fine job guys, quite a fine job!


* - I have to say that without git, R-C would probably be too painful, but git lets you manage the code so easily that R-C has become much simpler. It doesn't block a developer, as he can keep piling patches on top of his own repository while waiting for the review, and later use the rebasing features of git to fix whatever needs fixing quite easily.

SSSD: a tale of community, collaboration, success!

Today a long development cycle, started more than a year and a half ago, comes to a conclusion with a great release: SSSD 1.2.0 is out!

First of all I must say I am extremely proud of the team. When I started the project in September 2008 I knew where I wanted to go, and I knew it would be a long journey. But I didn't really know what the trip would be like.

Looking back at the first days, what we achieved seems like magic: so much was unknown, and so high were my expectations that I almost feared I couldn't live up to them myself. But thanks to Steve, Sumit, Jakub, Martin and others the project grew, matured, and now SSSD is going to be shipped in the forthcoming RHEL 6 release.

A few months ago Steve took over the release management role, and he has done an outstanding job. The SSSD 1.2.0 release has his name all over it. The dedication he showed is truly remarkable. Thanks Steve!

Besides the more dedicated developers, I also have to thank the many people who put SSSD under stress and tested it in real deployments since the early 0.x releases.

One of the most important factors for the success of a FOSS project is the formation of a community of people who can work together in a very cooperative way. All these people not only reported bugs but also sent patches and, most importantly, had the patience to interact and test fixes, make requests, and discuss needs and expectations. A great positive feedback loop; extremely motivating! I can say beyond any doubt that without them SSSD wouldn't be even close to where it is now.

THANK YOU contributors, all of you!

Of course a new cycle opens now, as new releases are already waiting in the pipeline, but it is a good moment to stop and look at what has been done.

SSSD is something I have been thinking about in various forms since I started working for Red Hat more than 3 years ago, and in vague forms way, way before that, back when I was still doing consulting jobs in Italy. Since I started formalizing it within Red Hat it has gone by many names (one of the stickier ones we used internally for a while was "Blue Box"), and it was often thought of as a piece of the puzzle we call FreeIPA. You can probably still find references to it in the older design plans on the FreeIPA wiki.

So what can SSSD do today?

The most interesting features are related to the primary use case we've been working toward: LDAP servers and Kerberos authentication.

SSSD works like a connection pooling and caching mechanism for a client. It provides the machine with users and groups fetched and cached from the central server. Plus it adds neat features like offline authentication, a real boon if you want to use LDAP and laptops at the same time, but in general a great feature if you have remote machines behind a slow or unstable link and you want to make sure your users can keep working if the connection goes temporarily down. It frees you from the need to put an LDAP replica in a remote office just for a very few users.

SSSD has a modular multi-process design and has been built with resilience and robustness in mind: a very small process controls a bunch of children that handle specific tasks. If any component dies, the monitor restarts it to avoid service disruption (although I have to say that it has been many, many moons since I had an issue on my machines, and that's just great).

SSSD is built with frontends to handle NSS and PAM communication, and backend providers to handle access to remote servers, plus a file-based mmapped cache that works as the unifying glue to store and retrieve data. Multiple different backends can be configured to retrieve user information and perform authentication. And many of these modules can be combined, as in the case of the IPA backend, which is essentially an LDAP identity provider plus a Kerberos authentication provider.
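
To give an idea of what this looks like in practice, here is a minimal sssd.conf sketch for the plain LDAP identity plus Kerberos authentication case; the hostnames and search base are made up, and some option names may differ slightly between SSSD releases:

    [sssd]
    config_file_version = 2
    services = nss, pam
    domains = example.com

    [domain/example.com]
    # identity data comes from LDAP, authentication goes through Kerberos
    id_provider = ldap
    auth_provider = krb5
    ldap_uri = ldap://ldap.example.com
    ldap_search_base = dc=example,dc=com
    krb5_server = kdc.example.com
    krb5_realm = EXAMPLE.COM
    # keep credentials cached so users can still log in while offline
    cache_credentials = true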

Much more could be said, but I think this is enough to ignite some curiosity for now ;-)

For the interested, SSSD has been shipped in Fedora for quite a while now, but only recently was authconfig modified, for the upcoming F-13 release, to make it simpler to configure. The integration is already quite nice and we hope to improve it even more in the future. Other distributions have already packaged it too, and will hopefully ship it as a first-class citizen soon.

Last but not least, I must also thank Red Hat for believing in this small project and funding most of its development so far. Red Hat is a great place to be if you want to develop core infrastructure technology.

Convincing a Windows Domain that we are trustworthy

In the last few weeks I have been working on trying to find out how to make a Windows 2008 R2 Domain Controller trust a Samba domain, so that it would consider us able to handle PACs and therefore send them to us.

This is different from a normal MIT Kerberos-level trust. When using those trusts, the Windows DC does not expect the other realm to be able to send PACs, understand PACs, or understand routing information for transitive trusts.

In order to set up a cross-realm trust you need to make the Windows DC believe we are actually just another Windows Domain, with all the bells and whistles of a Windows Domain. Well, actually not all of them, and discovering exactly which ones was my goal.

As usual with Windows, there is a lot of redundancy and different ways to do things. Depending on the way you try to set up cross-forest trust relationships you will cause different netlogon RPCs to be issued.

After implementing some of them, and finding that some others were hard, I tried to find a way to reduce the amount of calls we need.

It turns out, as one might expect, that creating the two halves of the trust, one on each DC, is easier, as only the verification process is left. What I did not expect was that this would be true even when the tool used to create the trust on the Samba side was actually a Windows 7 box.

Long story short, after fiddling around and hacking up some netlogon calls obscenely, in some cases hard-coding server names in there, I was able to convince the Windows DC that we were indeed a trustworthy realm: something to which it could route Kerberos packets, including the PAC.

At the same time the Samba domain KDC was able to parse the PAC and use the cross-realm trust account password to release tickets. And the Windows side was able to use this to seamlessly access the Samba fileserver.

For the moment it is all a big hack, and I have tested it only with a one-way trust relationship (the Samba domain trusts the Windows domain, but not the other way around). Yet it allowed me to finally narrow down the problem and understand exactly what the minimum set of calls we have to answer is and, most importantly, what we are supposed to answer.

Because of the hacks this code won't go into any tree for now, but it is the base I need to plan the next steps. There is a lot of work to do before we have the mechanisms needed to substitute the hacks with the proper actions the Samba server needs to take.

Ah, I almost forgot: while researching this matter I also found interesting oddities and some protocol issues. Those always spice up your day, as they derail all your work and distract you from your path until you understand and then solve or work around the problem. Usually just to fall into a new one a few days later, as soon as you have gotten back to the original thread and remembered what you were actually doing and expecting to happen, so you can happily forget it all over again. But this is also fun if you can take it philosophically :-)

Samba + MIT Kerberos, first steps are done

I've been working on rebasing the Samba patches to be able to push them upstream. After some quite deep rebasing work I was able to push all of the changes required to common code. And the amount of change was surprisingly small, all things considered.

Today I finished nailing down the last bits in the Samba and MIT sides of the plugin, implementing both the policy check calls and the constrained delegation calls. I will propose the patch to both upstreams soon.

Meanwhile my focus has been shifting toward cross-realm trust relationships, and in particular External and Forest trusts in AD parlance, both one-way and two-way.

Unfortunately the Samba 4 code still does not support cross-realm trusts, so I had to use two Windows 2008 servers to do my experiments.

The amount of calls that need to be implemented does not look too big, although the devil is always in the details. It even seems that there is some code already available, but it is not fully patched in. As things stand, a Windows DC is able to create the trusted domain object in Samba's database, but then Samba fails to reply to some queries about it and to set up Schannel over RPC to validate the trust from the Windows point of view.

I am considering working on Samba 4 to get it to work in a trusted realm scenario, but I still need to do some more research first.

Habemus PAC

After some working, digging, changing, re-changing, fixing, cursing, and fixing it again, I've got MIT Kerberos and Samba to collaborate on the PAC front as well.

Today I was able to log in on Windows 7 using the MIT KDC, and all players are happy.

This was an important milestone, although the job is certainly not finished. I still have to implement one important authorization function, and go over a few TODOs in the code.

But as with all milestones this was very satisfying, even more so because it took me a lot less time than I anticipated, and I usually underestimate :-)

Well, that's it for today!

Hurray! Got the first ticket from MIT Kerberos + Samba 4

It is always a sweet feeling when things go the way you like, and fast too!

After just a week working around this chimera, today I was able to tame the beast. I made krb5kdc return a TGT reading all data off of Samba 4's internal database.

I can't feel anything but triumph. It is true that it is not that much after all, but I can't help feeling happy about the result. This effort had been put off for so long and deemed so difficult that I was very pleased to find out it wasn't too difficult after all.

Of course the job is not done. The impedance mismatch between Samba 4's embedded Heimdal and MIT Kerberos interfaces forced me to defer adding the PAC. Without the PAC, the nice Windows 7 refuses to log you in of course, but that was expected, so it didn't bother me in the least.

Adding the PAC is not difficult, and all the code I need is in Luke's HDB bridge code, which also provided most of the guidance and code I needed for this effort.

Without Luke's code this effort would have been much more difficult indeed. The code itself is not very complex, but knowledge of both projects' internals was needed, and Luke provided the knowledge I was missing on the MIT kdb plugin side.

I hope to have a hacky prototype able to add the PAC using Luke's code next week. Once I can make Windows work with this code, I will actually start working on trying to get a little bit cleaner interfaces within Samba so that I can reduce the dependency on the Heimdal code hacks in the bridge code.

PS: if you want to see the work you can pull the code from these 2 branches:
git://git.samba.org/idra/samba.git
git://git.samba.org/idra/krb5.git

Mating Samba and MIT Kerberos

Just before the holidays I started working on a new project to mate Samba 4 and MIT Kerberos.

Samba 4 embeds a copy of Heimdal Kerberos, and I want to use MIT instead, as that’s what is distributed in RHEL and Fedora and it is the implementation of Kerberos we use in FreeIPA.

Samba 4 is basically one gigantic mess of spaghetti code (No it is not that bad, but dependencies are intricate :-)

Because it embeds the Heimdal KDC it also uses the Heimdal client library, which of course conflicts with the MIT Kerberos one. So here I am, building a plugin that can act as a separation layer that will, hopefully, keep the namespaces separated (thanks RTLD_LOCAL).
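
To illustrate the trick, here is a minimal C sketch of the kind of isolation RTLD_LOCAL buys you; this is not the actual plugin code, and the module path and bridge_init entry point are invented for illustration:

    /* Load a bridge module with RTLD_LOCAL so its Kerberos symbols stay
     * private to this handle and do not clash with the identically named
     * symbols of the other Kerberos library already loaded. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*bridge_init_fn)(void);

    int load_bridge(const char *path)
    {
        /* RTLD_NOW resolves all symbols up front, so conflicts show early;
         * RTLD_LOCAL keeps the module's symbols out of the global namespace. */
        void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return -1;
        }

        /* Only what we explicitly look up with dlsym() becomes visible
         * to the caller. */
        bridge_init_fn init = (bridge_init_fn)dlsym(handle, "bridge_init");
        if (init == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return -1;
        }

        return init();
    }

(Remember to link with -ldl on glibc-based systems.)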

It is going to be an interesting ride.

(if you want to take a look feel free to check my personal samba git repo on git.samba.org and soon I will also publish the krb5 repo with the other half somewhere too …)

Fun with Wiimotes

Today I saw this YouTube video and got intrigued again about playing with my Wii Remote.

So I searched around and found 2 useful projects.

The first is the wiiuse library; although it is a bit buggy and has horrible DOS-mode files (carriage returns at the end of each line), I quickly packaged it and even submitted a review bug to push it into Fedora.

The second is an even cruder program called XWii. The code is a bit horrible, but I was able to quickly hack it to do a few things, including mapping multimedia volume keys to the minus/plus/home Wiimote buttons, and adding a bit finer control to be able to use a Wiimote as a real IR mouse. There is a lot of work left to get this program into a state where I can consider proposing it as a Fedora package.

If I can find time on weekends I plan to buy a few infrared LEDs and play a bit with my Wiimotes and my video projector. If all goes well I might rewrite xwii in C as a real daemon and propose it as a package for Fedora. But no promises; this new year looks like I am going to work hard on a few work-related projects, so it may take quite some time, or forever ...

Holidays are awesome

Spent awesome holidays with relatives.

As usual the best part is the food. I love holiday food, especially Italian holiday food.

Today Luana and her parents made something special: Home made gnocchetti with beans. They brought home made sausages too, and this evening we are going to have home made pizza.

Total Bliss

Bye bye Evolution - welcome claws-mail

After more than 8 years of service I finally abandoned evolution for my work mail.

I used evolution with great satisfaction for many years, but recently it has been getting in the way.

I am still using evolution for my personal email on my personal desktop but not for work anymore.

Many small things got worse in the last year. Calendaring with Zimbra stopped working even half decently no less than two months ago, and there were changes in the way messages are displayed and threaded that I didn't like one bit and that got in the way of how I manage email.

As of late I kept using evolution mostly for the integrated calendar; although it never worked perfectly, it was still decent and the best compromise I could find. But since calendaring stopped working (appointment alarms do not fire, evolution prevents me from changing stuff, etc.) the last reason to keep sticking with evo faded.

So I looked around again and decided to give claws-mail another try. The last time I tried it was 3-4 years ago, and compared to evolution it was simply way too poor in features.

But recent releases are just about everything evolution should be for me. The only thing claws-mail lacks is the graphical polish evolution has. But I can live with an ugly tool as long as it does the job.

And claws-mail just does the job.

It lets me configure just about every single behaviour and every single item in the interface the way I like. I finally found again the joy of configuring a tool so that it fits my way of doing things, instead of having to bend my habits to a rigid tool like evolution is gradually becoming.

If I were to make the usual dreaded car analogy, evolution is looking more and more like the typical cheap sports car that has a very nice line and nice features, but is fragile.

Claws is more like a van or a truck: it ain’t pretty, but when I have to use it for my work it is just about perfect; it gets the job done, without fear of scratching the paint either.

Claws-mail lacks decent calendaring support and has no way to integrate with Zimbra, and the optional calendaring plugin it has is pretty poor, but given that evolution is broken in that regard I can hardly call that a show-stopper. For calendaring I am now also experimenting with sunbird again.

But for mail it supports all I need: GSSAPI auth works, IMAP works great, and it looks like offline support works as well. Search does its job, and so on.

But again the main feature is that I was able to configure just about any aspect I wanted.

I can tell it exactly how to behave when I change folders (I prefer it to select the last mail I’ve read).

It doesn’t jump around hectically when I enter a folder just because I like to keep new mail at the bottom and not at the top, the way evolution does.

It is generally faster at rendering messages.

One difference from evolution is how it manages attachments; I think I like how it does it, though. It does not cause the whole view pane to flicker, like evolution does, just because it has to recalculate the page layout to show the attachment content when you select it.

In short, I fell in love with how well it can be configured, and although it lacks calendaring and it is not very multithreaded (sometimes you have to wait for another operation to finish), it looks solid and didn’t have a problem with my multi-gig IMAP repository.

All in all, right now I find it much more power-user friendly, and it is making my use of email enjoyable again, like it was with evolution up to 2-3 years ago.

Let’s see how long the honeymoon will last :-)

Me


My name is Simo Sorce, and I am currently employed as a software engineer in the Red Hat Crypto team.

Quotes

All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

Democracy is when the indigent, and not the men of property, are the rulers.

Justice in the life and conduct of the State is possible only as first it resides in the hearts and souls of the citizens.



Powered by VIM.