
ColdPath

Security Professionals You Trust


Security Technical Guides

Secure a VPS to Manage multiple Websites – Part I (Isolate Users and PHP)

August 13, 2020 by Tony Perez

If you are an agency building websites for your customers, and managing those sites on a Virtual Private Server (VPS), this post is for you.

While working on a recent forensics case, we encountered one of the most egregious mistakes we see with organizations managing multiple websites on one server.

All websites were owned by the default web server user and group.


Filed Under: Security Technical Guides

Leverage OSSEC as a Syslog Client or Server

May 19, 2020 by Tony Perez

Open Source Security (OSSEC) is a Host-Based Intrusion Detection System (HIDS) that allows you to quickly collect, analyze, and correlate events across your entire infrastructure. It can be deployed across your environment, from network devices (e.g., routers, switches) to endpoints (e.g., servers, desktops, laptops).

The System Logging Protocol (syslog) is a mechanism used to collect, package, and send data from a client machine to a server machine, typically a dedicated syslog server.

While helping a customer think through their logging solutions and requirements, we found that many OSSEC users don't realize OSSEC can function as both a syslog client and a syslog server.

This is a technical guide that explains when a deployment might require this type of configuration, and how to extend your OSSEC deployment to fully capitalize on the platform's capabilities.

Accounting for Busy Nodes / Networks

Using OSSEC as a syslog client allows you to divide activity across different networks while still leveraging the power of OSSEC.

Imagine a world where you have two different networks (Zone 1 and Zone 2). Zone 1 is extremely busy, generating in excess of 10 million requests a day. Zone 2 is quieter, generating a few hundred requests a day. Maybe Zone 1 is your web environment, with complex configurations that account for load balancers, databases, web servers, and so on. Zone 2, however, is a cluster of desktop / notebook machines.

Example Network with Different System Boundaries (Zones)

The task is to collect and aggregate all the data into a centralized manager so that you can perform your duties.

One option might be a normal Agent / Server deployment in which all events are consumed by one manager. Another, less-used option is to deploy two Agent / Server configurations, where one manager is subservient to the Master.

There are a few benefits to this deployment:

  • Easier management of agents (allows you to group devices and data);
  • A network / device that yields excessive logs can be pre-parsed locally, so only the data that matters is forwarded to the master;
  • Centralized control of all networks (assuming you're managing multiple networks).

This configuration allows you to isolate, process, and parse the information that matters in each zone.

In the example above, Zone 1 could have its own manager that collects, analyzes, and correlates activity across its network, independent of what is happening in Zone 2. The Zone 1 manager can then send the processed data to the Master manager. This ensures that you're accurately collecting and storing the event data that matters, while being responsible and cognizant of the potential load on your network.
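To make the flow concrete, here is a simplified sketch of the layout described above (the zone contents and alert threshold are illustrative, not a prescription):

Zone 1 agents --> Zone 1 OSSEC Manager --(processed alerts via syslog)--> Master OSSEC Manager
Zone 2 agents -------------------------(standard agent connections)----> Master OSSEC Manager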

This configuration is extremely powerful, and it helps you manage your event data more efficiently. Below I'll share some tips on how to make this work.

OSSEC: Syslog Client and Manager

Any OSSEC deployment (e.g., Agent, Manager, Hybrid) can function as a Syslog client, but only the manager can function as a Syslog manager. This is an important distinction.

Convert OSSEC Install Into a Syslog Client

First, you want to enable syslog client on your OSSEC install:

/var/ossec/bin/ossec-control enable client-syslog

This will enable the syslog client in OSSEC. In the example above, we did this on the OSSEC manager.

Once enabled, update the OSSEC configuration file to make sure it knows where to send the data:

vim /var/ossec/etc/ossec.conf 

Add a new section in the configuration file for syslog.

<syslog_output>
  <level>6</level>
  <server>[public IP of manager]</server>
  <port>1515</port>
</syslog_output>

This is going to allow you to take control of what gets sent to the Master server. Above I use three key options: level, server, and port.

  • level -> defines the minimum alert level that should be sent upstream. This is extremely powerful, as it allows you to reduce the noise being sent to your master. In the example above, I chose to send only alerts that are level 6 or higher.
  • server -> the public IP of your Master.
  • port -> the port you want to send the information to; it must match the port the master listens on (1515 in this example).
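For context, this block is not a standalone file; it sits inside the existing <ossec_config> element of ossec.conf, alongside the sections that are already there. A minimal sketch (the IP is a placeholder):

<ossec_config>
  <!-- existing configuration sections -->
  <syslog_output>
    <level>6</level>
    <server>[public IP of manager]</server>
    <port>1515</port>
  </syslog_output>
</ossec_config>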

What's really unique about this is that it allows you to take your OSSEC alerts, which have already been processed and enriched with intelligence from the raw events, and send them to your master intact.

The last step is to restart OSSEC on this endpoint:

/var/ossec/bin/ossec-control restart

To verify the data is being sent, run a simple tcpdump and track the outbound requests:

tcpdump -i eth0  -nnn -s 0 -A udp port 1515
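If you'd rather not sniff traffic, another quick check is whether the OSSEC syslog forwarder daemon (ossec-csyslogd) actually came up after the restart; a simple sketch, assuming the standard install path:

ps aux | grep ossec-csyslogd
grep csyslogd /var/ossec/logs/ossec.log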

OSSEC Manager as a Syslog Manager

On the Master manager, configure it to accept the incoming syslog events.

To configure the manager as the Syslog manager, simply enable syslog connections by editing the OSSEC configuration file:

vim /var/ossec/etc/ossec.conf 

Create a new remote entry using the following options:

<remote>
  <connection>syslog</connection>
  <port>1515</port>
  <allowed-ips>[public IP of syslog agent]</allowed-ips>
</remote>

This is going to allow the manager to consume the events from your syslog client appropriately; in this instance, that client is another OSSEC manager. Above, I used three options:

  • connection -> tells OSSEC to expect syslog data.
  • port -> the port on which to listen for the events; it must match the port configured on the client (1515 in this example).
  • allowed-ips -> identifies where the events are coming from; this needs to be the client's public IP.
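Two practical notes. First, just as on the client, restart OSSEC on the manager so the new syslog listener takes effect:

/var/ossec/bin/ossec-control restart

Second, if a host firewall sits in front of the master, the syslog port must be reachable from the client; a sketch assuming ufw and the UDP transport shown in the tcpdump above:

ufw allow proto udp from [public IP of syslog agent] to any port 1515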

You can confirm things are functioning by running a similar tcpdump on the manager, or you can parse the ossec.log file for details. Here is one way:

cat /var/ossec/logs/ossec.log | grep syslog

You should see something similar to the following:

2020/05/18 20:59:24 ossec-remoted: Remote syslog allowed from: '[public IP of the syslog agent]'

2020/05/18 20:59:30 ossec-logcollector(1950): INFO: Analyzing file: '/var/log/syslog'.
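If you want to watch entries like these arrive in real time rather than running a one-off search, something like this works as well:

tail -f /var/ossec/logs/ossec.log | grep syslog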

When it’s all done, you will see an entry in your agent control for the new syslog connection.

/var/ossec/bin/agent_control -lc

It will be displayed like this:

List of syslog-based sources: ID: na, Name: [hostname]->[public IP], IP: [public IP], Syslog-based Active


OSSEC is one of the most powerful open-source solutions available to any organization for the collection, aggregation, and analysis of event data. What it lacks in aesthetics, it easily makes up for in economics, functionality, and a relatively low footprint.

If you're managing your own OSSEC deployment, need help thinking through a new one, or need help sustaining what you already have, ColdPath is here to help. Contact us via our Contact Page, or send us an email at info@coldpath.net.

Filed Under: Security Technical Guides

Recovering Servers Post-Hack

April 22, 2020 by Tony Perez

After a hack, should an organization restore its servers from a new OS or from the backup?

We were recently helping a hacked organization when this question was asked. The organization had been given two very different opinions, from polar opposite ends of the spectrum, and wanted to know what we would do.

One group said, "yes, absolutely, you must use a new OS and migrate the data," while the other said, "you'll be fine restoring from a backup, just be sure to patch the vulnerability."

Our response was a bit different – it depends.

This post expands on why our recommendation could be perceived as ambiguous, and hopefully provides a framework that you can leverage when your organization is presented with the same question.

Balance Security Recommendations with Business Needs

To understand the recommendation, you need context:

This was a decent-sized business with close to 100 servers compromised. The bad actor was able to deploy ransomware and was holding the organization hostage for a ransom. These servers were the core of the company, supporting all business functions (e.g., HR, Finance, R&D, Operations, Marketing, Commerce). The specific size in terms of headcount or revenue doesn't matter, but it's important to note that they did not have a dedicated security team (though one person handled security part-time) and lacked basic security knowledge.

What we learned through the process is that the organization had no way of knowing a) the vector used to exploit the environment, or b) when the attack actually started.

What this tells us is that you have to assume the worst case, and the worst case would undeniably require you to start with fresh OS installs. The problem, however, is whether the business can support a move like this.

This reminds us of our job as security professionals; we exist to help the organization achieve its business objectives, not the other way around. We help organizations identify risks and implement controls to mitigate those risks so that the business can continue to do its work. Post-compromise, that objective doesn't change; in fact, it expands. We not only have to identify and eradicate the issue, but also have a responsibility to ensure business continuity.

To give an appropriate answer on what needs to be done, you have to understand some very basic facts about the organization:

  • Does the organization have a security team?
  • Does the organization have a team required to spin up the new environment?
  • Does the organization have the technical expertise?
  • What are the specific configuration differences between the old environment and the new environment?
  • What kind of servers are we talking about?
  • How long would it take (in hours / days)?
  • What is the real confidence level of the estimate?
  • Can the business afford to be down the projected time frame?

We have yet to meet an organization fitting this profile that has had solid answers to all of these questions. And so, how you approach the problem must take these insights into consideration.

There is what they must do, and then there is what they can do. Our job in security is to help figure out the “how”.

Concepts That Establish a Foundation

An organization is often unable to answer the most basic questions post-compromise. This means that, while we can all agree that the desired end state should be fresh OS installs, the big variable we need to account for is time.

In a situation like the one this organization faced, what is needed is a practical, phased approach to how, and when, systems go live. This is why our recommendation was more grey than black and white.

The recommendation we provided was to do both, but in a phased manner, and it was built on three concepts:

  • Categorization: A basic process of putting things into "categories," "groups," "classes," or some other similar bucket.
  • Functional Isolation: Stems from the world of electrical circuitry, where you isolate components. We use it when talking about networks and systems to represent the same thing: isolate the function of a network, isolate the function of a server (i.e., a web server shouldn't be a print server or an email server; let it serve one function).
  • System Boundaries: A term used in system design to represent the logical lines ("boundaries") that separate different environments.

The biggest issue the organization faced is that it did not know how the attack happened.

In hindsight, they were able to identify a series of Indicators of Compromise (IoCs) that pre-date the actual hack by a couple of weeks. This conforms to what we know of the tactics, techniques, and procedures (TTPs) employed by bad actors, but falls short of providing a definitive answer about what happened. The organization was also limited in its forensic ability: it lacked any logging / activity tracking, and its backups were part of the compromised servers. Each of these contributes to the general consensus that, yes, they do want fresh OS installs.

It forces you to assume the worst case. But the business needs to get operational quickly to ensure business continuity. This is where you must balance the desired end state with reality, learning to balance security with business needs.

Fortunately, security professionals can help mitigate the risk.

Mitigating Exposure

The number one risk the organization must be aware of is that it might suffer another compromise the minute it spins back up. Bad actors are known to deploy payloads within a network that allow them to bypass access mechanisms and defensive controls. This is the reality you must deal with when you don't know how a hack happened.

As such, your focus is not so much on preventing the next hack as on reducing the potential impact if, and when, it happens again.

This can be mitigated by using a phased approach, one in which they establish new system boundaries that are functionally isolated and based on a hierarchical categorization scheme.

1. Categorize the Servers

Servers should be categorized based on their impact to the business.

A well-recognized categorization scheme is: Low (L), Moderate (M), and High (H).

The most difficult part of this process is assessing what belongs in which category. The following calculation should be employed:

Security Category (SC) = 
{(confidentiality, impact), (integrity, impact), (availability, impact)}

This model makes use of the security triad, Confidentiality, Integrity, and Availability (CIA), which speaks to the three core security objectives.

Example of how it can be applied:

A department file server contains both sensitive personnel information and routine administrative information. The following are possible security categories for the information on the file server.

SC personnel information = 
{(confidentiality, M), (integrity, M), (availability, L)}
SC administrative information = 
{(confidentiality, L), (integrity, L), (availability, L)}

The resulting categorization of the file server is the highest impact level, for each security objective, across the types of information stored on it.

SC file server = 
{(confidentiality, M), (integrity, M), (availability, L)}

This means that the file server is categorized as Moderate overall.

This is probably the most time-consuming part of the process.

2. Reduce Function Where Possible

The categorization process should shed light on what each server is doing. This is an opportunity to identify servers that should be repurposed to reduce their functionality where possible, i.e., system functional isolation.

3. Establish New Boundaries

Once you have your servers categorized, you can establish new system boundaries. These boundaries are basic logical groupings of each category and help you better understand your environment.

4. Isolate System Boundaries

Once you have those boundaries, you must take the time to perform two critical functions (i.e., network functional isolation):

  • Review access controls;
  • Review user groups.

The biggest mistake organizations make that allows for this level of compromise is a systematic failure in how roles, the associated responsibilities, and access are managed.

The new server categories present an opportunity to review how roles and responsibilities are designed and implemented. It should force you to ask tough questions about who needs access to which systems. It should also force you to assess how information flows between the various system boundaries.

A basic construct would, at a minimum, ensure that communication is not allowed between systems at different levels, and it should introduce additional controls (e.g., firewalls, access constraints, MFA) within boundaries as well.
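As a concrete illustration of that kind of cross-boundary control, here is a minimal sketch using iptables on a gateway that sits between two boundaries (the subnets are placeholders, not the organization's real ranges):

# Block traffic from the lower-impact boundary into the higher-impact boundary
iptables -A FORWARD -s 10.0.20.0/24 -d 10.0.10.0/24 -j DROP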

This is also a great time to examine access control itself. What level of authentication is being employed?

The more critical a system, the more important it is to ensure appropriate checks and balances are introduced. This is where things like Multi-Factor Authentication (MFA) become extremely important. If you're interested in learning more about MFA, I encourage you to read the series of articles prepared by my friend Jesper Johansson on the subject.

5. Phased Deployment

This final step is about designing and implementing a plan of action based on the first four steps of this process.

This approach highlights the sequencing of events. It identifies how specific environments should go live.

Servers that are deemed critical, based on your categorization calculation, should not go live without a fresh OS, while those deemed less impactful to the business can go live from backups.

This approach gives your business a very practical system that balances the potential risk against the need for business continuity.

Working in a World of Unknowns

As security professionals, we must remember that the business does not serve us; we serve it. We must educate and inform those responsible for the business, and it's our responsibility to adapt and help come up with creative solutions that align with the ultimate objective: running the business.

We must not be rigid in our thinking; instead, we should tailor our ideas accordingly. In the world of security, we will rarely have enough information.

The scenario above helps illustrate what this means.

Both consultants were right about the desired outcome, but wrong in the absolute nature of their proposed sequence of events. Taking into consideration the full scope of the compromise, and the state of the business, helps us create an approach that achieves the same outcome over time while also reducing the risk of a new compromise.

When we work in a world of unknowns all we can do is make the best possible decisions under the current circumstances.

Filed Under: Security Technical Guides Tagged With: incident response
