Distributed firewalls are host-resident security software applications that protect the enterprise network's servers and end-user machines against unwanted intrusion. They filter traffic from both the Internet and the internal network, and can therefore block hacking attacks that originate from either side. This is important because the most costly and destructive attacks still originate from within the organization. They are like personal firewalls except that they offer several important advantages such as central management, logging, and, in some cases, finer access-control granularity. These features are necessary to implement corporate security policies in larger enterprises: policies can be defined and pushed out on an enterprise-wide basis.
GUI- Graphical User Interface, IETF- Internet Engineering Task Force, IKE- Internet Key Exchange, SSL- Secure Sockets Layer, TCP/IP- Transmission Control Protocol/Internet Protocol.
Traditional perimeter firewalls are a critical component of network defense, but they should not be considered the only line of defense. First, their protection is too coarse: everything inside the perimeter is implicitly trusted, which leaves the firewall helpless against the malicious insider, who operates freely within the firewall's security perimeter. A distributed firewall uses a central policy but pushes enforcement towards the edges. That is, the policy defines what connectivity, inbound and outbound, is permitted; this policy is distributed to all endpoints, which enforce it.
In the full-blown version, endpoints are characterized by their IPsec identity, typically in the form of a certificate. Rather than relying on the topological notions of "inside" and "outside", as a traditional firewall does, a distributed firewall assigns certain rights to whichever machines hold the private keys corresponding to certain public keys. A laptop directly connected to the Internet thus has the same level of protection as a desktop in the organization's facility. Conversely, a visitor's laptop connected to the corporate net would not have the proper credentials, and hence would be denied access, even though it is topologically "inside". To implement a distributed firewall, we need a security policy language that can describe which connections are acceptable, an authentication mechanism, and a policy distribution scheme. As the policy specification language, we use the KeyNote trust-management system. As the authentication mechanism, we decided to use IPsec for traffic protection and user/host authentication. While we could, in principle, use application-specific security mechanisms, this would require extensive modifications to all such applications to make them aware of the filtering mechanism. Furthermore, we would then depend on the good behavior of the very applications we are trying to protect. Finally, it would be impossible to secure legacy applications with inadequate provisioning for security. When it comes to policy distribution, we have a number of choices. We can distribute the KeyNote credentials to the various end users, who can then deliver their credentials to the end hosts through the IKE protocol. The users do not have to be online for a policy update; rather, they can periodically retrieve the credentials from a repository.
Since the credentials are signed and can be transmitted over an insecure connection, users could retrieve their new credentials even when the old ones have expired. This approach also prevents, or at least mitigates, the effects of some possible denial of service attacks. The credentials can be pushed directly to the end hosts, where they would be immediately available to the policy verifier. Since every host would need a large number, if not all, of the credentials for every user, the storage and transmission bandwidth requirements are higher than in the previous case.
The credentials can be placed in a repository where they can be fetched as needed by the hosts. This requires constant availability of the repository, and may impose some delay in the resolution of a request (such as a TCP connection establishment). Not all IKE implementations support distribution of KeyNote credentials. Furthermore, some IPsec implementations do not support connection-grained security. Finally, since IPsec is not in wide use, it is desirable to allow for policy-based filtering that does not depend on IPsec. Thus, it is necessary to provide a policy resolution mechanism that takes into consideration the connection parameters, the local policies, and any available credentials (retrieved through IPsec or other means), and determines whether the connection should be allowed.
EVOLUTION OF DISTRIBUTED FIREWALL 
Conventional firewalls rely on the notions of restricted topology and controlled entry points to function. More precisely, they rely on the assumption that everyone on one side of the entry point (the firewall) is to be trusted, and that anyone on the other side is, at least potentially, an enemy.
Some problems with conventional firewalls that led to distributed firewalls are as follows.
Due to increasing line speeds and the more computation-intensive protocols that a firewall must support, firewalls tend to become congestion points. This gap between processing and networking speeds is likely to increase, at least for the foreseeable future; while computers (and hence firewalls) are getting faster, the combination of more complex protocols and the tremendous increase in the amount of data that must pass through the firewall has outpaced, and will likely continue to outpace, Moore's Law.
There exist protocols, and new protocols are designed, that are difficult to process at the firewall, because the latter lacks certain knowledge that is readily available at the endpoints. Although there exist application level proxies that handle such protocols, such solutions are viewed as architecturally "unclean" and in some cases too invasive.
Likewise, because of its dependence on the network topology, a perimeter firewall can only enforce a policy on traffic that traverses it. Thus, traffic exchanged among nodes in the protected network cannot be controlled. This gives an attacker who is already an insider, or who can somehow bypass the firewall, complete freedom to act.
Worse yet, it has become trivial for anyone to establish a new, unauthorized entry point to the network without the administrator’s knowledge and consent. Various forms of tunnels, wireless, and dial-up access methods allow individuals to establish backdoor access that bypasses all the security mechanisms provided by traditional firewalls. While firewalls are in general not intended to guard against misbehavior by insiders, there is a tension between internal needs for more connectivity and the difficulty of satisfying such needs with a centralized firewall.
IPsec is a protocol suite, recently standardized by the IETF, which provides network-layer security services such as packet confidentiality, authentication, data integrity, replay protection, and automated key management.
This is an artifact of firewall deployment: internal traffic that is not seen by the firewall cannot be filtered; as a result, internal users can mount attacks on other users and networks without the firewall being able to intervene.
End-to-end encryption can also be a threat to firewalls, as it prevents them from looking at the packet fields necessary to do filtering. Allowing end-to-end encryption through a firewall implies considerable trust in the users on the part of the administrators.
Finally, there is an increasing need for finer-grained access control which standard firewalls cannot readily accommodate without greatly increasing their complexity and processing requirements.
COMPONENTS OF DISTRIBUTED FIREWALL
There are three basic components of a distributed firewall. They are discussed below: -
CENTRAL MANAGEMENT SYSTEM
Central Management, a component of distributed firewalls, makes it practical to secure enterprise-wide servers, desktops, laptops, and workstations. Central management provides greater control and efficiency and it decreases the maintenance costs of managing global security installations. This feature addresses the need to maximize network security resources by enabling policies to be centrally configured, deployed, monitored, and updated. From a single workstation, distributed firewalls can be scanned to understand the current operating policy and to determine if updating is required.
POLICY DISTRIBUTION
The policy distribution scheme should guarantee the integrity of the policy during transfer. The distribution of the policy can differ and varies with the implementation: policies can be either pushed directly to end systems, or pulled when necessary.
HOST-END IMPLEMENTATION
The security policies transmitted from the central management server have to be implemented by the host. The host-end part of the distributed firewall does not provide any administrative control for the network administrator over the implementation of policies; the host allows or denies traffic based on the security rules it has implemented.
WORKING OF A DISTRIBUTED FIREWALL
Distributed firewalls are often kernel-mode applications that sit at the bottom of the OSI stack in the operating system. They filter all traffic regardless of its origin, whether the Internet or the internal network, treating both as "unfriendly". They guard the individual machine in the same way that the perimeter firewall guards the overall network. Distributed firewalls rest on three notions: -
a) A policy language that states what sort of connections are permitted or prohibited,
b) Any of a number of system management tools, and
c) IPsec, the network-level encryption mechanism for TCP/IP.
The basic idea is simple. A compiler translates the policy language into some internal format. The system management software distributes this policy file to all hosts that are protected by the firewall. And incoming packets are accepted or rejected by each "inside" host, according to both the policy and the cryptographically-verified identity of each sender.
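This basic idea can be sketched in a few lines (the names and the compiled-policy encoding below are hypothetical illustrations, not the paper's actual format): each "inside" host applies the same centrally compiled policy to the cryptographically verified identity of the sender.

```python
# Hypothetical sketch: a host-resident filter applying a centrally
# distributed, "compiled" policy keyed by verified sender identity.

# The compiled policy maps a sender identity (e.g. a public-key
# fingerprint) to the set of local ports that sender may reach.
COMPILED_POLICY = {
    "key:alice": {22, 23},   # may reach ssh and telnet
    "key:bob":   {80},       # may reach the web server only
}

def accept_packet(verified_sender, local_port):
    """Accept or reject a packet exactly as every other host would,
    since all hosts enforce the same distributed policy."""
    allowed_ports = COMPILED_POLICY.get(verified_sender, set())
    return local_port in allowed_ports

print(accept_packet("key:alice", 22))    # True
print(accept_packet("key:bob", 22))      # False
print(accept_packet("key:mallory", 80))  # False: unknown identity
```

Note that topology plays no role here: the decision depends only on the sender's identity and the policy, not on which network the packet arrived from.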
Trust management is a relatively new approach to solving the authorization and security policy problem. Making use of public-key cryptography for authentication, trust management dispenses with unique names as an indirect means of performing access control. Instead, it uses a direct binding between a public key and a set of authorizations, as represented by a safe programming language. This results in an inherently decentralized authorization system with sufficient expressibility to guarantee flexibility in the face of novel authorization scenarios.
One instance of a trust-management system is KeyNote. KeyNote provides a simple notation for specifying both local security policies and credentials that can be sent over an untrusted network. Policies and credentials contain predicates that describe the trusted actions permitted by the holders of specific public keys. Signed credentials, which serve the role of "certificates," have the same syntax as policy assertions, but are also signed by the entity delegating the trust. Applications communicate with a "KeyNote evaluator" that interprets KeyNote assertions and returns results to applications, as shown in Figure 2. However, different hosts and environments may provide a variety of interfaces to the KeyNote evaluator (library, UNIX daemon, kernel service, etc.).
A KeyNote evaluator accepts as input a set of local policy and credential assertions, and a set of attributes, called an "action environment," that describes a proposed trusted action associated with a set of public keys (the requesting principals). The KeyNote evaluator determines whether proposed actions are consistent with local policy by applying the assertion predicates to the action environment. The KeyNote evaluator can return values other than simply true and false, depending on the application and the action environment definition.
An important concept in KeyNote is "monotonicity". This simply means that given a set of credentials associated with a request, if there is any subset that would cause the request to be approved then the complete set will also cause the request to be approved. This greatly simplifies both request resolution (even in the presence of conflicts) and credential management. Monotonicity is enforced by the KeyNote language (it is not possible to write non-monotonic policies).
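Monotonicity can be illustrated with a toy evaluator (a deliberate simplification: real KeyNote assertions are structured text evaluated against an action environment, not Python predicates). A request approved by some subset of credentials remains approved when more credentials are added.

```python
# Toy monotonic evaluator: a request is approved if ANY credential's
# predicate matches the action environment, so adding credentials can
# never turn an approval into a denial.

def evaluate(credentials, action_env):
    return any(pred(action_env) for pred in credentials)

# Hypothetical credentials, expressed here as predicates.
ssh_cred = lambda env: env["local_port"] == 22 and env["protocol"] == "tcp"
telnet_cred = lambda env: env["local_port"] == 23

env = {"local_port": 22, "protocol": "tcp"}  # the proposed action
print(evaluate([ssh_cred], env))               # True: a subset approves
print(evaluate([ssh_cred, telnet_cred], env))  # True: the superset still approves
```

This is why conflict resolution is simple: a credential can refine or extend what is allowed, but it can never retract an approval granted elsewhere.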
It is worth noting here that although KeyNote uses cryptographic keys as principal identifiers, other types of identifiers may also be used. For example, usernames may be used to identify principals inside a host. In this environment, delegation must be controlled by the operating system (or some implicitly trusted application), similar to the mechanisms used for transferring credentials in UNIX or in capability-based systems.
Also, in the absence of cryptographic authentication, the identifier of the principal requesting an action must be securely established. In the example of a single host, the operating system can provide this information.
Comment: Allow Licensee to connect to local port 23 (telnet) from internal
addresses only, or to port 22 (ssh) from anywhere. Since this is a
policy, no signature field is required.
Conditions: (local_port == "23" && protocol == "tcp" &&
remote_address > "159.130.006.000" &&
remote_address < "159.130.007.255") -> "true";
local_port == "22" && protocol == "tcp" -> "true";
Licensees: "dsa-hex:986512a1" || "x509-base64:19abcd02=="
Comment: Authorizer delegates SSH connection access to either of the
Licensees, if coming from a specific address.
Conditions: (remote_address == "139.091.001.001" &&
local_port == "22") -> "true";
KeyNote Policy and Credential. The local policy allows a particular user connect access to the telnet port from internal addresses, or to the SSH port from any address. That user then delegates to two other users (keys) the right to connect to SSH from one specific address. Note that the first key can effectively delegate at most the same rights it possesses; KeyNote does not allow rights amplification, so any delegation acts as refinement.
In our prototype, end hosts (as identified by their IP address) are also considered principals when IPsec is not used to secure communications. This allows local policies or credentials issued by administrative keys to specify policies similar to current packet filtering rules.
In the context of the distributed firewall, KeyNote allows us to use the same, simple language for both policy and credentials. The latter, being signed, may be distributed over an insecure communication channel. In KeyNote, credentials may be considered as an extension, or refinement, of local policy; the union of all policy and credential assertions is the overall network security policy. Alternately, credentials may be viewed as parts of a hypothetical access matrix. End hosts may specify their own security policies, or they may depend exclusively on credentials from the administrator, or do anything in between these two ends of the spectrum. Perhaps of more interest, it is possible to "merge" policies from different administrative entities and process them unambiguously, or to layer them in increasing levels of refinement. This merging can be expressed in the KeyNote language, in the form of intersection (conjunction) and union (disjunction) of the component sub-policies.
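The merging of sub-policies can be sketched as predicate combinators (a simplification of what KeyNote expresses in its own assertion language; the example policies are invented):

```python
# Sketch: merging sub-policies from different administrative entities
# via intersection (conjunction) and union (disjunction), mirroring
# how component sub-policies compose in KeyNote.

def conjunction(*policies):
    """All sub-policies must authorize the action."""
    return lambda env: all(p(env) for p in policies)

def disjunction(*policies):
    """Any one sub-policy suffices to authorize the action."""
    return lambda env: any(p(env) for p in policies)

# Two hypothetical administrative entities.
corp_policy = lambda env: env["local_port"] in (22, 80)
dept_policy = lambda env: env["remote_net"] == "10.1.0.0/16"

merged = conjunction(corp_policy, dept_policy)  # layered refinement
either = disjunction(corp_policy, dept_policy)  # union of permissions

env = {"local_port": 22, "remote_net": "10.1.0.0/16"}
print(merged(env))  # True: both entities agree
print(merged({"local_port": 22, "remote_net": "192.168.0.0/16"}))  # False
```

Conjunction models increasing levels of refinement (a department can only narrow what the corporation allows), while disjunction models independent grants.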
Although KeyNote uses a human-readable format and it is indeed possible to write credentials and policies that way, our ultimate goal is to use it as an interoperability-layer language that "ties together" the various applications that need access control services. An administrator would use a higher-level language or GUI to specify correspondingly higher-level policy and then have this compiled to a set of KeyNote credentials. This higher-level language would provide grouping mechanisms and network-specific abstractions that are not present in KeyNote. Using KeyNote as the middle language offers a number of benefits:
It can handle a variety of different applications (since it is application-independent but customizable), allowing for more comprehensive and mixed-level policies (e.g., covering email, active code content, IPsec, etc.).
It provides built-in delegation, thus allowing for decentralized administration.
It allows for incremental or localized policy updates (as only the relevant credentials need to be modified, produced, or revoked).
Conditions: (@remote_port < 1024 && @local_port == 22) -> "true";
An example credential where an (administrative) key delegates to an IP address. This would allow the specified address to connect to the local SSH port, if the connection is coming from a privileged port. Since the remote host has no way of supplying the credential to the distributed firewall through a security protocol like IPsec, the distributed firewall must search for such credentials or must be provided with them when policy is generated/updated.
For our development platform we decided to use the OpenBSD operating system. OpenBSD provides an attractive platform for developing security applications because of its well-integrated security features and libraries (an IPsec stack, SSL, KeyNote, etc.). However, similar implementations are possible under other operating systems. Our system comprises three components: a set of kernel extensions, which implement the enforcement mechanisms; a user-level daemon process, which implements the distributed firewall policies; and a device driver, which is used for two-way communication between the kernel and the policy daemon.
For our working prototype we focused our efforts on the control of TCP connections. Similar principles can be applied to other protocols; for unreliable protocols, some form of reply caching is desirable to improve performance. In the UNIX operating system, users create outgoing and allow incoming TCP connections using the connect(2) and accept(2) system calls respectively. Since any user has access to these system calls, some "filtering" mechanism is needed. This filtering should be based on a policy that is set by the administrator. Filters can be implemented either in user space or inside the kernel; each approach has its advantages and disadvantages. A user-level approach requires each application of interest to be linked with a library that provides the required security mechanisms, e.g., a modified libc. This has the advantage of operating-system independence, and thus does not require any changes to the kernel code. However, such a scheme does not guarantee that the applications will use the modified library, potentially leading to a major security problem. A kernel-level approach requires modifications to the operating system kernel. This restricts us to open-source operating systems like BSD and Linux. The main advantage of this approach is that the additional security mechanisms can be enforced transparently on the applications.

As we mentioned previously, the two system calls we need to filter are connect(2) and accept(2). When a connect(2) is issued by a user application and the call traps into the kernel, we create what we call a policy context, associated with that connection. The policy context is a container for all the information related to that specific connection. We associate a sequence number with each such context and then start filling it with all the information the policy daemon will need to decide whether to permit the connection.
In the case of connect(2), this includes the ID of the user that initiated the connection, the destination address and port, etc. Any credentials acquired through IPsec may also be added to the context at this stage. There is no limit to the kind or amount of information we can associate with a context. We can, for example, include the time of day or the number of other open connections of that user, if we want them to be considered by our decision-making strategy. Once all the information is in place, we commit that context. The commit operation adds the context to the list of contexts the policy daemon needs to handle. After this, the application blocks waiting for the policy daemon's reply. Accepting a connection works in a similar fashion. When accept(2) enters the kernel, it blocks until an incoming connection request arrives. Upon receipt, we allocate a new context, which we fill in as in the connect(2) case. The only difference is that we now also include the source address and port. The context is then enqueued, and the process blocks waiting for a reply from the policy daemon. In the next section we discuss how messages are passed between the kernel and the policy daemon.
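A user-space simulation of this context mechanism might look as follows (all structures and field names are hypothetical stand-ins for the kernel's actual data structures):

```python
# Hypothetical simulation: building a "policy context" for a connection
# and queueing it for the policy daemon, as the modified
# connect(2)/accept(2) paths do in the kernel.
import itertools

_seq = itertools.count(1)  # sequence numbers for matching replies
pending = []               # contexts awaiting a policy-daemon verdict

def make_context(uid, dst_addr, dst_port, src=None, credentials=()):
    """Container for all information related to one connection."""
    return {
        "seq": next(_seq),
        "uid": uid,                        # user that issued the call
        "dst": (dst_addr, dst_port),
        "src": src,                        # filled in only on accept(2)
        "credentials": list(credentials),  # e.g. acquired through IPsec
    }

def commit(ctx):
    """Commit hands the context to the policy daemon's queue; the
    calling application would then block awaiting the reply."""
    pending.append(ctx)
    return ctx["seq"]

seq = commit(make_context(uid=1000, dst_addr="10.0.0.5", dst_port=22))
print(seq, len(pending))  # 1 1
```

In the real system the blocking happens inside the kernel and the queue is drained through the policy device; the data flow, however, is the same.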
To maximize the flexibility of our system and allow for easy experimentation, we decided to make the policy daemon a user-level process. To support this architecture, we implemented a pseudo device driver, /sawan/policy, that serves as a communication path between the user-space policy daemon and the modified system calls in the kernel. Our device driver supports the usual operations (open(2), close(2), read(2), write(2), and ioctl(2)). Furthermore, we have implemented the device driver as a loadable module, which increases the flexibility of our system even more, since we can add functionality dynamically without needing to recompile the whole kernel. If no policy daemon has opened /sawan/policy, no connection filtering is done. Opening the device activates the distributed firewall and initializes data structures. All subsequent connect(2) and accept(2) calls will go through the procedure described in the previous section. Closing the device will free any allocated resources and disable the distributed firewall. When reading from the device, the policy daemon blocks until there are requests to be served. The policy daemon handles the policy resolution messages from the kernel and writes back a reply; the write(2) is responsible for returning the policy daemon's decision to the blocked connection call, and then waking it up. It should be noted that both the device and the associated messaging protocol are not tied to any particular type of application, and may in fact be used without any modifications by other kernel components that require similar security policy handling.
Finally, we have included an ioctl(2) call for "house-keeping". This allows the kernel and the policy daemon to re-synchronize in case of any errors in creating or parsing the request messages, by discarding the current policy context and dropping the associated connection.
The third and last component of our system is the policy daemon. It is a user-level process responsible for making decisions, based on policies that are specified by some administrator and credentials retrieved remotely or provided by the kernel, on whether to allow or deny connections. Policies are initially read in from a file; it is possible to remove old policies and add new ones dynamically. In the current implementation, such policy changes only affect new connections. We will discuss how these changes can potentially be made to affect existing connections, if such functionality is required. Communication between the policy daemon and the kernel takes place, as we mentioned earlier, through the policy device. The daemon receives each request from the kernel by reading the device. The request contains all the information relevant to that connection. Processing of the request is done by the daemon using the KeyNote library, and a decision to accept or deny it is reached. Finally, the daemon writes the reply back to the kernel and waits for the next request. While the information received in a particular message is application-dependent (in our case, relevant to the distributed firewall), the daemon itself has no awareness of the specific application. Thus, it can be used to provide policy resolution services for many different applications, literally without any modifications. When using a remote repository server, the daemon can fetch a credential based on the ID of the user associated with a connection, or on the local or remote IP address. A very simple approach is to fetch the credentials via HTTP from a remote web server. The credentials are stored by user ID and IP address, and provided to anyone requesting them. If credential "privacy" is a requirement, one could secure this connection using IPsec or SSL. To avoid potential deadlocks,
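The daemon's credential lookup can be sketched like this (the repository contents, key formats, and credential strings are invented stand-ins; the real system fetches signed KeyNote credentials over HTTP and evaluates them with the KeyNote library):

```python
# Hypothetical sketch of the policy daemon's decision path: combine
# kernel-supplied credentials with any fetched from the repository,
# keyed by user public key or remote IP address, then decide.

REPO = {  # stand-in for the remote web-server credential repository
    "key:alice": ["allow port 22"],   # indexed by user public key ...
    "192.0.2.7": ["allow port 80"],   # ... or by remote IP address
}

def fetch_credentials(user_key, remote_ip):
    """Simulates the HTTP fetch from the credential repository."""
    return REPO.get(user_key, []) + REPO.get(remote_ip, [])

def decide(request, policy):
    creds = request["credentials"] + fetch_credentials(
        request.get("user_key"), request.get("remote_ip"))
    needed = f"allow port {request['local_port']}"
    return needed in creds or needed in policy

req = {"credentials": [], "user_key": "key:alice",
       "remote_ip": "198.51.100.1", "local_port": 22}
print(decide(req, policy=[]))  # True: the repository supplies the credential
```

Because the daemon only sees opaque requests and credentials, the same loop can serve policy decisions for applications other than the firewall.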
To better explain the interaction of the various components in the distributed firewall, we discuss the course of events during two incoming TCP connection requests, one of which is IPsec-protected. The local host to which the connection is directed is part of a distributed firewall, and has a local policy. In the case of a connection coming in over IPsec, the remote user or host will have established an IPsec Security Association with the local host using IKE. As part of the IKE exchange, a KeyNote credential is provided to the local host. Once the TCP connection is received, the kernel will construct the appropriate context. This context will contain the local and remote IP addresses and ports for the connection, the fact that the connection is protected by IPsec, the time of day, etc. This information, along with the credential acquired via IPsec, will be passed to the policy daemon. The policy daemon will perform a KeyNote evaluation using the local policy and the credential, and will determine whether the connection is authorized. In our case, the positive response will be sent back to the kernel, which will then permit the TCP connection to proceed. Note that more credentials may be provided during the IKE negotiation (for example, a chain of credentials delegating authority). If KeyNote does not authorize the connection, the policy daemon will try to acquire relevant credentials by contacting a remote server where these are stored. In our current implementation, we use a web server as the credential repository; in a large-scale network, a distributed/replicated database could be used instead. The policy daemon uses the public key of the remote user (when it is known, i.e., when IPsec is in use) and the IP address of the remote host as the keys with which to look up credentials; more specifically, credentials where the user's public key or the remote host's address appears in the Licensees field are retrieved and cached locally.
These are then used in conjunction with the information provided by the kernel to re-examine the request. If it is again denied, the connection is ultimately denied.
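The two-pass resolution just described can be sketched as follows (the helper functions are hypothetical placeholders for the KeyNote evaluation and the repository fetch):

```python
# Hypothetical sketch of two-pass policy resolution: evaluate with the
# credentials supplied via IPsec; on denial, fetch further credentials
# from the repository and re-examine; a second denial is final.

def resolve(evaluate, fetch_more, request):
    if evaluate(request["credentials"], request):
        return "ACCEPT"                       # first pass succeeds
    extra = fetch_more(request)               # e.g. HTTP fetch, then cache
    if evaluate(request["credentials"] + extra, request):
        return "ACCEPT"                       # repository credential helped
    return "DENY"                             # ultimately denied

# Toy stand-ins: a credential is just an allowed port number here.
evaluate = lambda creds, req: req["local_port"] in creds
fetch_more = lambda req: [22]  # repository yields an ssh credential

print(resolve(evaluate, fetch_more, {"credentials": [], "local_port": 22}))
# ACCEPT
print(resolve(evaluate, fetch_more, {"credentials": [], "local_port": 23}))
# DENY
```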
One of the most frequently used terms in network security, and in distributed firewalls in particular, is policy, so it is essential to understand policies. A "security policy" defines the security rules of a system. Without a defined security policy, there is no way to know what access is allowed or disallowed.
A simple example of a firewall policy is
Allow all connections to the web server.
Deny all other access.
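This two-rule policy can be sketched as a first-match rule list with a default-deny (an illustrative encoding, not any specific firewall's syntax):

```python
# Hypothetical encoding of the policy above: ordered rules, first
# match wins, and anything unmatched falls through to the default.

RULES = [
    ("allow", {"dst_port": 80}),  # allow all connections to the web server
]
DEFAULT = "deny"                  # deny all other access

def check(packet):
    for action, match in RULES:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return DEFAULT

print(check({"dst_port": 80}))  # allow
print(check({"dst_port": 25}))  # deny
```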
The hosts, while booting up, ping the central management server to check whether it is up and active. Each host registers with the central management server and requests the policies it should implement. The central management server then provides the host with its security policies.
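The pull model described above can be sketched as follows (the server interface and message names are invented for illustration):

```python
# Hypothetical sketch of the pull model: at boot, a host checks that
# the management server is reachable, registers, and requests the
# policies it should enforce.

class ManagementServer:
    def __init__(self, policies):
        self.policies = policies   # policies keyed by host ID
        self.registered = set()

    def ping(self):
        return True                # server is up and active

    def register(self, host_id):
        self.registered.add(host_id)

    def get_policies(self, host_id):
        return self.policies.get(host_id, [])

def host_boot(host_id, server):
    if not server.ping():
        return []                  # no server: nothing to enforce yet
    server.register(host_id)
    return server.get_policies(host_id)

server = ManagementServer({"host-a": ["allow tcp/22", "deny *"]})
print(host_boot("host-a", server))  # ['allow tcp/22', 'deny *']
```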
For example, a license server or a security clearance server can be asked if a certain communication should be permitted. A conventional firewall could do the same, but it lacks important knowledge about the context of the request. End systems may know things like which files are involved, and what their security levels might be. Such information could be carried over a network protocol, but only by adding complexity.
The push technique is employed when the policies are updated on the central management side by the network administrator and the hosts have to be updated immediately. Push technology ensures that the hosts always have the updated policies at any time.
The policy language defines which inbound and outbound connections on any component of the network policy domain are allowed, and can affect policy decisions at any layer of the network, be it rejecting or passing certain packets or enforcing policies at the application layer.
Many possible policy languages can be used, including file-oriented schemes similar to Firmato, the GUIs that are found on most modern commercial firewalls, and general policy languages such as KeyNote. The exact nature is not crucial, though clearly the language must be powerful enough to express the desired policy.
Much work has been done over the previous years in the area of firewalls. Earlier systems describe different approaches to host-based enforcement of security policy. These mechanisms depend on IP addresses for access control, although they could potentially be extended to support a credential-based policy mechanism similar to what we describe in this paper.
We have discussed the concept of a distributed firewall. Under this scheme, network security policy specification remains under the control of the network administrator. Its enforcement, however, is left up to the hosts in the protected network. Security policy is specified using KeyNote policies and credentials, and is distributed (through IPsec, a web server, a directory-like mechanism, or some other protocol) to the users and hosts in the network. Since enforcement occurs at the endpoints, various shortcomings of traditional firewalls are overcome.