Part 2: Configure Logging and Alerting on Google Cloud Logging — Audit VM and Cloud PostgreSQL on Google Cloud Platform

Ari Sukarno
6 min read · Sep 21, 2022


Let’s continue with Part 2 of our audit :)

After successfully installing and enabling the DB extension, we will continue by configuring the Ops Agent, viewing the logs, and configuring alerts in Google Cloud Logging.

  1. Configure Ops Agent for VM Logs

At the VM level, we can send any log inside the VM to Google Cloud Logging. It’s possible to create a custom configuration in /etc/google-cloud-ops-agent/config.yaml [1]. The configuration format consists of three main sections:

a. receivers: this element describes the data to be collected from log files.

b. processors: this optional element describes how the agent can modify the collected information.

c. service: this element links receivers and processors together to create data flows, called pipelines. The service element contains a pipelines element, which can include multiple pipeline definitions.

Following the audit plan we laid out in Part 1, we define the Ops Agent configuration as below:

logging:
  receivers:
    # User list receiver
    OS_User_List:
      type: files
      include_paths: [/etc/passwd]
    # OS user activity receiver
    OS_User_Activity:
      type: files
      include_paths: [/var/log/cmdline]
  service:
    pipelines:
      # User list pipeline
      user_list_pipeline:
        receivers: [OS_User_List]
      # OS user activity pipeline
      OS_User_Activity_Pipeline:
        receivers: [OS_User_Activity]

As you can see, we read the logs from cmdline to track user activity; cmdline auditing covers everything our audit needs. Before jumping to the next step, please enable cmdline logging by following this article: https://confluence.atlassian.com/confkb/how-to-enable-command-line-audit-logging-in-linux-956166545.html
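The linked article boils down to logging each interactive bash command through PROMPT_COMMAND and routing it to a file with rsyslog. A rough sketch of that setup (the local6 facility and file names are one common choice; adjust them to your environment):

```shell
# Sketch of cmdline audit logging (roughly what the linked article sets up).
# 1) Log every interactive bash command to syslog facility local6.
#    Append this line to /etc/bash.bashrc (or /etc/profile.d/cmdline-audit.sh):
export PROMPT_COMMAND='RETRN_VAL=$?; logger -p local6.debug "$(whoami) [$$]: $(history 1 | sed "s/^[ ]*[0-9]*[ ]*//") [$RETRN_VAL]"'

# 2) Route local6 to the file the Ops Agent receiver reads.
#    Put this rule in /etc/rsyslog.d/bash.conf:
#      local6.*    /var/log/cmdline
# 3) Restart rsyslog so the new rule takes effect:
#      sudo systemctl restart rsyslog
```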

*Make sure the logs are being produced in /var/log/cmdline; you can check by running the command “cat /var/log/cmdline”.

We can customize this as needed; receivers can read sources such as files, syslog, windows_event_log, etc., as GCP mentions in the official documentation [2]. Edit the configuration:

sudo vim /etc/google-cloud-ops-agent/config.yaml

Save and exit using ESC then :wq if you are using vim.

After we edit the configuration, we must restart the Ops Agent:

sudo service google-cloud-ops-agent restart

After configuring the Ops Agent, we can see the logs in Cloud Logging for the VM as well as for Cloud PostgreSQL. In fact, for Cloud SQL, once the extension is enabled the logs are sent to Cloud Logging automatically.

2. View the Logs in Cloud Logging

Go to Google Cloud Logging by searching for “Logging” in the search bar on the GCP console; each receiver name from the Ops Agent configuration will be reflected in the log name:

a. VM logs

We can see the logs by typing a query directly or by clicking the resource. Below we demonstrate viewing the VM logs, as in this picture:

Then select the Log Name:

We run some commands in the VM, and they should show up in Cloud Logging:

So we have successfully sent the VM logs to Google Cloud Logging.

b. Cloud PostgreSQL Logs

As in the previous step, we can see the Cloud PostgreSQL logs by defining the resource name and choosing the Cloud SQL instance name accordingly.

For this operation, I run the command “CREATE USER usertest” inside Cloud PostgreSQL, and we’ll see the log in the protoPayload.request section.

We have also successfully received the Cloud PostgreSQL logs in Cloud Logging. Now we can continue to implement the alerting.

3. Configure Alerting

Per our audit requirements, we also need to raise an alert when an activity happens and send a notification by email. In practice, we create a query whose condition matches the activity; for example, for USER CREATED we need a query whose results are all USER CREATED activities. I have created the queries the audit needs:

a. OS Logs

  • New User Created
resource.type="gce_instance" 
resource.labels.instance_id="your-instance-id"
resource.labels.project_id="your-project-id"
logName="projects/your-project-id/logs/OS_User_Activity"
"useradd" OR "adduser"
  • User Updated
resource.type="gce_instance" 
resource.labels.instance_id="your-instance-id"
resource.labels.project_id="your-project-id"
logName="projects/your-project-id/logs/OS_User_Activity"
"usermod"
  • User Deleted
resource.type="gce_instance" 
resource.labels.instance_id="your-instance-id"
resource.labels.project_id="your-project-id"
logName="projects/your-project-id/logs/OS_User_Activity"
"userdel" OR "deluser"
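The last line of each VM query is a plain substring match over the log entry, so conceptually it behaves like grep. A quick local sanity check, using a made-up sample line in the cmdline format (not real log output):

```shell
# Local sanity check: the clause "useradd" OR "adduser" is a substring match,
# so any cmdline entry containing either word will trigger the alert.
# The sample line below is illustrative, not real log output.
line='root [1234]: useradd audituser [0]'
if echo "$line" | grep -Eq 'useradd|adduser'; then
  echo "match"
fi
```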

b. Cloud PostgreSQL Logs

  • New User Created
resource.type="cloudsql_database" 
resource.labels.project_id="your-project-id"
resource.labels.database_id="your-project-id:your-cloud-sql-name"
protoPayload.request.command="CREATE ROLE"
  • User Updated
resource.type="cloudsql_database" 
resource.labels.project_id="your-project-id"
resource.labels.database_id="your-project-id:your-cloud-sql-name"
protoPayload.request.command="ALTER ROLE"
  • User Deleted
resource.type="cloudsql_database" 
resource.labels.project_id="your-project-id"
resource.labels.database_id="your-project-id:your-cloud-sql-name"
protoPayload.request.command="DROP ROLE"
  • Delete Row
resource.type="cloudsql_database" 
resource.labels.project_id="your-project-id"
resource.labels.database_id="your-project-id:your-cloud-sql-name"
protoPayload.request.command="DELETE"
  • Insert Tables
resource.type="cloudsql_database" 
resource.labels.project_id="your-project-id"
resource.labels.database_id="your-project-id:your-cloud-sql-name"
protoPayload.request.command="INSERT"
  • Drop Tables
resource.type="cloudsql_database" 
resource.labels.project_id="your-project-id"
resource.labels.database_id="your-project-id:your-cloud-sql-name"
protoPayload.request.command="DROP TABLE"
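Similarly, the Cloud SQL filters match on the command field that the audit extension writes under protoPayload.request. A local illustration with a hand-written sample entry (the JSON is made up for demonstration, not real Cloud Logging output):

```shell
# Local illustration: the Cloud SQL filters match the command field under
# protoPayload.request. The sample entry below is made up.
entry='{"protoPayload":{"request":{"command":"CREATE ROLE","statement":"CREATE USER usertest"}}}'
if echo "$entry" | grep -q '"command":"CREATE ROLE"'; then
  echo "CREATE ROLE alert would fire"
fi
```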

As you can see, the queries differ only slightly, in the actual activity that happened in the VM or Cloud PostgreSQL. For VM logs we just add the activity string (e.g., adduser), while for Cloud PostgreSQL the activity is produced in the protoPayload, so we filter on it there. If we run the query for, say, CREATE USER, the result is exactly what we need:

c. Alerting using Cloud Monitoring

After we’ve defined a query, we can create an alert from it by clicking “Create alert” in the console:

Specify the name, the query (based on what we created), the alert frequency, and the notification channel (email here; you can also route to other platforms such as Slack, SMS, or webhooks), then click Save.
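If you prefer the CLI over the console, a log-based alerting policy can also be created with gcloud from a policy file. A hypothetical sketch, where the project ID, channel ID, and display names are placeholders, and the policies surface sits under gcloud alpha at the time of writing:

```shell
# Hypothetical sketch: create the same log-based alert from the CLI.
# Placeholders: your-project-id, your-channel-id, the display names.
cat > policy.json <<'EOF'
{
  "displayName": "postgres-user-created",
  "combiner": "OR",
  "conditions": [{
    "displayName": "CREATE ROLE in Cloud SQL logs",
    "conditionMatchedLog": {
      "filter": "resource.type=\"cloudsql_database\" AND protoPayload.request.command=\"CREATE ROLE\""
    }
  }],
  "alertStrategy": {
    "notificationRateLimit": { "period": "300s" }
  },
  "notificationChannels": [
    "projects/your-project-id/notificationChannels/your-channel-id"
  ]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=policy.json
```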

To see the alert policy, we can go to Monitoring -> Alerting -> View Alert:

We can create other alert policies at the VM or DB level by specifying a query, clicking create alert, and so on.

d. Testing Alert

We’ll try to create a new user at the DB level again and see what happens:

Create new user on Cloud PostgreSQL
Alerting in Email

As you can see, the user-creation activity triggered an alert in email, and you can read more about the activity by clicking “View Incident”.

With this audit scenario, the auditor or security team can easily identify when a sensitive event happens on their system, whether on the VM or the database. Yeah, we’ve finally finished creating the logs and building the alerting system to email.

Let’s continue to Part 3 to make the report… :)

References:

[1] https://cloud.google.com/logging/docs/agent/ops-agent/configuration

[2] https://cloud.google.com/logging/docs/agent/ops-agent/configuration#logging-receivers

