
Does SecAuditLog support enabling Keepalive when sending logs over HTTPS? #3344

Closed
meguoe opened this issue Feb 28, 2025 · 11 comments

meguoe commented Feb 28, 2025

ModSecurity version (and connector): ModSecurity v3.0.13 with nginx-connector v1.0.3
WebServer: nginx-1.27.2
OS (and distro): alpine:3.20.3

Does SecAuditLog support enabling Keepalive when sending logs over HTTPS?


airween commented Mar 3, 2025

Hi @meguoe,

sorry for the late response.

Libmodsecurity3 uses curl to send AuditLogs through HTTP(S).

The code itself is very simple:

    m_http_client.setRequestType("application/json");
    m_http_client.setRequestBody(log);
    m_http_client.download(m_audit->m_path1);

As you can see, each transaction is sent as a separate HTTP request (to the AuditLog endpoint). This short introduction was only to show the mechanism, and to ask: why do you need the keepalive feature here?

Btw, the mentioned m_http_client object is implemented here. As you can see, every download() call creates a new curl handle, and at the end of the function it's cleaned up. So while curl might in principle reuse an existing connection (even without a Connection: keep-alive header), based on the code I think we can say no: libmodsecurity3 does not use keepalive, and there is no way to enable it in the config.

If you could explain why you need that, we can consider adding the feature.


airween commented Mar 10, 2025

Hi @meguoe, could you help us solve your issue?


meguoe commented Mar 11, 2025

I’ve integrated ModSecurity into my proxy forwarding platform, but when sending audit logs over HTTP, I’m seeing a lot of TIME_WAIT connections, which take up network resources. I want to optimize this by using Keepalive, similar to how Nginx upstream works—creating a fixed number of Keepalive connections to handle audit log transmissions more efficiently.


airween commented Mar 11, 2025

Hi @meguoe,

> I’ve integrated ModSecurity into my proxy forwarding platform, but when sending audit logs over HTTP, I’m seeing a lot of TIME_WAIT connections, which take up network resources. I want to optimize this by using Keepalive, similar to how Nginx upstream works—creating a fixed number of Keepalive connections to handle audit log transmissions more efficiently.

thanks, now I see your issue.

Just an idea: could you try adding this line to the source of libmodsecurity3 (see the link to the relevant part above)?

    curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);

For more information, please take a look at curl's manual.


meguoe commented Mar 12, 2025

thanks!

@meguoe meguoe closed this as completed Mar 12, 2025
@meguoe meguoe reopened this Mar 12, 2025

meguoe commented Mar 12, 2025

I see that the keep-alive parameters can be configured, but just adding these settings may not be enough, because download() calls curl_easy_init() and curl_easy_cleanup() every time:

    curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 60L);
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 30L);
To make Keep-Alive effective, we need to maintain a persistent CURL* handle instead of reinitializing and cleaning it up on every request.


airween commented Mar 12, 2025

I'm not sure that's a good solution.

Assume the client is created only once per ModSecurity instance, all transactions use the same handle, and it uses the HTTP keep-alive feature. In this case the server that receives the logs also has a timeout for keep-alive connections.

Now try to figure out what happens if the server closes the connection after the configured timeout (we can't be sure that the WAF sends a log within the configured timeout period).

We can check the connection state before we send the log, and re-open it if it's already closed, but that will be a bit difficult.

Moreover, what if different vhosts send their logs to different log servers? In that case we would need to create as many handles as there are configured vhosts.

I'm not saying that's impossible, but I'm afraid we don't have enough resources at the moment to implement this. If you want to try it, we can help, and after some time in testing we can add that to the master branch.
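The check-and-reopen idea mentioned above can be sketched independently of curl; the function below is a generic, hypothetical illustration (not proposed libmodsecurity3 code) of "try the existing connection, reconnect once on failure, retry":

```cpp
#include <functional>

// Generic re-open-on-failure pattern: attempt the send over the
// existing connection; if it fails (e.g. the server timed it out),
// reconnect once and retry.
bool send_with_reopen(const std::function<bool()>& send,
                      const std::function<bool()>& reconnect) {
    if (send()) return true;          // existing connection still good
    if (!reconnect()) return false;   // could not re-establish
    return send();                    // single retry on the fresh connection
}
```

The difficulty airween describes is in implementing `send` and `reconnect` reliably for a shared curl handle across many worker threads and vhosts, not in the retry pattern itself.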


meguoe commented Mar 13, 2025

Thanks for your reply! After some discussion, we have switched to using the Concurrent mode for logging.

@meguoe meguoe closed this as completed Mar 13, 2025
@nf-brentsaner

@airween FWIW, per RFC 2616 § 8.1.2, even a 4XX will still keep a conn alive ("...even after error responses from the server.").

You could just periodically send a GET / to the log collector URL. That solves the server closing situation you describe. (The server also sends a Connection header if it is closing the keepalive, so you'd know when to re-open regardless.)

I strongly recommend prioritizing shared/persistent curl handle use; HTTPS is the only SecAuditLogType that avoids local disk I/O, but its performance and resource usage are unnecessarily hindered by opening a new connection for each request. This has a significant effect on implementations handling a very large number of concurrent requests.


airween commented Mar 27, 2025

Hi @nf-brentsaner,

> @airween FWIW, per RFC 2616 § 8.1.2, even a 4XX will still keep a conn alive ("...even after error responses from the server.").

thanks,

> You could just periodically send a GET / to the log collector URL. That solves the server closing situation you describe. (The server also sends a Connection header if it is closing the keepalive, so you'd know when to re-open regardless.)

I'm not sure that solves the problem completely.

But before we look at the problem itself: I'm not sure it's a good idea to expect every customer to implement their endpoint to accept an arbitrary URI (just to keep the connection alive). Or we would have to document that, which feels a bit odd to me (writing a small how-to about setting up SecAuditLog through HTTP(S)).

The problem is: what happens if the connection is closed for some other reason? E.g. a network issue, the endpoint server being restarted by its admins, etc.?

> I strongly recommend prioritizing shared/persistent curl handler use; HTTPS is the only SecAuditLogType that avoids local disk I/O but its performance and resource usage is unnecessarily greatly hindered by opening a connection for each request. This has a significant effect for implementations handling a very large number of concurrent requests.

Thanks for the idea, but unfortunately there are many other things on my list. If any volunteer could help with this, I can of course merge it.


nf-brentsaner commented Mar 27, 2025

> But before we take a look at the problem, I'm not sure it's a good idea to expect any customer to implement their end point to accept any URI (just for keep the connection alive). Or we have to communicate that, but it's a bit weird for me (to write a small how-to about setting SecAuditLog through HTTP(S)).

Hrm? There's no need for any extra configuration beyond whatever is currently configured on the remote end to accept the POST request. A GET request to a POST-only endpoint returns a 405 error, which you will note is a 4xx; thus it keeps the connection alive, even though it's an error, per the RFC. I am saying you can literally just send a GET to whatever URL is configured for SecAuditLog when SecAuditLogType == HTTPS, even if that URL is not configured to receive/handle a GET. Per the RFC, it will renew the keepalive timer.

Additionally, if the server is gracefully closing, it will send the client a response with Connection: close header.

This is far easier than I think you are making it out to be. You don't need to do any state management or anything of the sort on your end. The server will tell you it is closing the connection, and the server will accept any request to the endpoint as a valid keepalive, even if it is an invalid request for that endpoint. All you need to do on ModSecurity's end is:

1.) send any request every nT period of time, regardless of whether a log event is to be sent
2.) check the response for a Connection: close header
    a.) if present, consider that connection invalid/closed administratively

That's it.
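Step 2.) above boils down to one header check. A minimal sketch (the function name server_is_closing is illustrative; header-name matching is case-insensitive per the HTTP spec):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Returns true if the given response header line announces that the
// server is closing the keep-alive connection ("Connection: close").
bool server_is_closing(const std::string& header_line) {
    std::string lower(header_line.size(), '\0');
    std::transform(header_line.begin(), header_line.end(), lower.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return lower.rfind("connection:", 0) == 0 &&
           lower.find("close") != std::string::npos;
}
```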

On server restart, or network failure, or whatever, you still face the issue whether you use a keepalive/persistent handle or not. There is quite literally no change in behavior on the client end; the only difference is that you find out sooner that the connection to the remote has failed, because the keepalive ("1.)" above) failed rather than a log send. Being concerned about this situation has exactly the same weight as asking "what if the webserver is down when we try to send an event log?", because it is the exact same scenario, and it can be recovered from in the exact same way: either trigger setup of the persistent handle again on the next log event send, or trigger the setup again on a keepalive send.

Does this make sense?
