Does SecAuditLog support enabling Keepalive when sending logs over HTTPS? #3344
Hi @meguoe, sorry for the late response. Libmodsecurity3 uses libcurl to send the audit log. The code itself is very simple:

```cpp
m_http_client.setRequestType("application/json");
m_http_client.setRequestBody(log);
m_http_client.download(m_audit->m_path1);
```

As you can see, each transaction is a unique HTTP request (to the AuditLog endpoint). This short introduction was only to show the mechanism, and to ask why you need keepalive. If you could explain why you need that, we can consider adding the feature.
Hi @meguoe, could you help us solve your issue?
I’ve integrated ModSecurity into my proxy forwarding platform, but when sending audit logs over HTTP, I’m seeing a lot of TIME_WAIT connections, which take up network resources. I want to optimize this by using keepalive, similar to how Nginx upstream keepalive works: creating a fixed number of persistent connections to handle audit log transmissions more efficiently.
Hi @meguoe,
thanks, now I see your issue. Just an idea: could you try adding this line to the source of libmodsecurity3 (see the relevant part linked above)?

```cpp
curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);
```

For more information, please take a look at curl's manual.
thanks!
I see that the Keep-Alive parameters can be configured, but just adding these settings may not be enough, because `download()` calls `curl_easy_init()` and `curl_easy_cleanup()` every time.
I'm not sure that's a good solution. Assume the client is created only once per ModSecurity instance, all transactions use the same handler, and the connection is kept alive. Now try to figure out what happens if the server closes the connection after the configured timeout (we can't be sure that the WAF sends a log within the configured timeout period). We could check the connection state before we send the log and re-open it if it's already closed, but that would be a bit difficult.

Moreover, what if different vhosts send their logs to different log servers? In that case we would have to create as many handlers as there are configured vhosts.

I'm not saying that's impossible, but I'm afraid we don't have enough resources at the moment to implement this. If you want to try it, we can help, and after some time in testing we can add it to the master branch.
Thanks for your reply! After some discussion, we have switched to using the Concurrent mode for logging.
@airween FWIW, per RFC 2616 § 8.1.2, even a 4XX response will still keep a connection alive ("...even after error responses from the server."). You could just periodically send a request to keep the connection open.

I strongly recommend prioritizing shared/persistent curl handle use; HTTPS is the only SecAuditLogType that avoids local disk I/O, but its performance and resource usage are unnecessarily hindered by opening a connection for each request. This has a significant effect on implementations handling a very large number of concurrent requests.
Hi @nf-brentsaner,

thanks,

I'm not sure that solves the problem completely. But before we look at the problem itself: I'm not sure it's a good idea to expect customers to implement their endpoint so that it accepts any URI, just to keep the connection alive. Or we would have to communicate that, which feels a bit weird to me (writing a small how-to about setting this up).

The other problem is: what happens if the connection is closed for some different reason? E.g. a network issue, the endpoint server's admin restarting the server, etc.?
Thanks for the idea, but unfortunately there are many other things on my list. But if any volunteer could help with this, I can merge it, of course.
Hrm? There's no need for any extra configuration beyond whatever is currently configured on the remote end to accept the POST request. A GET request to a POST-only endpoint returns an error response (typically a 405), and per the RFC section quoted above, even an error response keeps the connection alive.

Additionally, if the server is gracefully closing, it will send the client a response with a `Connection: close` header. This is far easier than I think you are making it out to be. You don't need to do any state management or anything of the sort on your end. The server will tell you it is closing the connection, and the server will accept any request to the endpoint as a valid keepalive, even if it is an invalid request for that endpoint. All you need to do on ModSecurity's end is:

1. send any request every nT period of time, regardless of whether a log event is to be sent or not.

That's it. On server restart, or network failure, or whatever, you still face the issue regardless of keepalive/persistent handler or not. There is quite literally no change in behavior on the client end; the only difference is that you know sooner that the connection to the remote has failed, because the keepalive (item 1 above) failed instead of a log send. Your concern about this situation has exactly the same weight as asking "what if the webserver is down when we try to send an event log?", because it is the exact same scenario and can be recovered from in the exact same way: either trigger a setup of the persistent handler again on the next log-event send, or trigger the setup again on a keepalive send.

Does this make sense?
ModSecurity version (and connector): ModSecurity v3.0.13 with nginx-connector v1.0.3
WebServer: nginx-1.27.2
OS (and distro): alpine:3.20.3
Does SecAuditLog support enabling Keepalive when sending logs over HTTPS?