Why use logging to a database instead of log files?


Asked on October 27, 2018 in Database.



    Here are some advantages and disadvantages of log files vs. a (relational) log database:

    • Log files are fast, reliable, and scalable (Yahoo! is said to make heavy use of log files for its click-tracking analytics).
    • Log files are easy for sysadmins to maintain.
    • Log files can be very flexible, since you can write almost anything to them.
    • Log files require heavy parsing, and possibly a map-reduce style setup, for data extraction.
    • Log-db structures sit much closer to your application, shortening the turnaround time for some features. This can be a blessing or a curse; probably a curse in the long run, since you can end up with a tightly coupled application and analytics code base.
    • A log db can reduce logging noise and redundancy: log files are insert-only, whereas a log db lets you update and do associated inserts (normalization, if we dare).
    • A log db can be fast and scalable too if you use data partitioning and/or multiple log databases (rejoining the data via downstream replication).
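    To illustrate the parsing point: extracting anything from a flat log file means parsing every line, whereas structured records can be filtered directly. A minimal sketch (the log format here is a made-up example, not a real one):

```ruby
# Hypothetical access-log line format: "2018-10-27 12:00:01 INFO user=42 action=click"
LINE = /\A(?<date>\S+) (?<time>\S+) (?<level>\w+) user=(?<user>\d+) action=(?<action>\w+)\z/

log_lines = [
  "2018-10-27 12:00:01 INFO user=42 action=click",
  "2018-10-27 12:00:02 WARN user=7 action=login",
]

# Flat file: every line must be parsed before you can ask any question of it...
events = log_lines.map { |l| LINE.match(l).named_captures }

# ...whereas structured (database-style) records can simply be filtered.
warnings = events.select { |e| e["level"] == "WARN" }
puts warnings.first["user"]   # => "7"
```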
    Answered on October 27, 2018.

    If we wish to change the default logging behavior, simply create a custom logger object that responds to all the Rails logger methods:

    • add
    • debug, warn, error, info, fatal, unknown

    We can then replace the default logger on each base class we want to customize:

    ActiveRecord::Base.logger = YouLogger.new
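    As a sketch, such a logger only needs to respond to the methods listed above (the class name `YouLogger` is illustrative, matching the snippet; here it simply delegates to Ruby's stdlib Logger):

```ruby
require "logger"

# Illustrative custom logger: delegates to Ruby's stdlib Logger while
# responding to every method Rails expects on a logger object.
class YouLogger
  SEVERITIES = %i[debug info warn error fatal unknown].freeze

  def initialize(io = $stdout)
    @logger = Logger.new(io)
  end

  # Rails calls add(severity, message, progname) internally.
  def add(severity, message = nil, progname = nil, &block)
    @logger.add(severity, message, progname, &block)
  end

  # Define debug/info/warn/error/fatal/unknown by delegation.
  SEVERITIES.each do |severity|
    define_method(severity) do |message = nil, &block|
      @logger.send(severity, message, &block)
    end
  end
end
```

    You could then assign `ActiveRecord::Base.logger = YouLogger.new` as shown in the snippet, and SQL logging would be routed through this object.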
    
    
    Answered on October 27, 2018.

    This is comparatively easy to set up, and gives you all the obvious benefits of being able to analyze your logs in a db, especially for tracing what a user was doing just before an error.

    However, we have to guard against SQL injection, buffer overflows, and other security problems in the logging program.

    Answered on October 27, 2018.

    Either works. It’s up to your preference.

    We have one central database where ALL of our apps log their error messages. Every app we write is set up in a table with a unique ID, and the error log table contains a foreign key reference to the AppId.

    This has been a HUGE bonus for us in giving us one place to monitor errors. We had done it as a file system or by sending emails to a monitored inbox in the past, but we were able to create a fairly nice web app for interacting with the error logs. We have different error levels, and we have an “acknowledged” flag field, so we have a page where we can view unacknowledged events by severity, etc.
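    A sketch of that scheme, with in-memory records standing in for the two tables (all table and column names here are assumptions, not the poster's actual schema):

```ruby
# Stand-ins for the apps table and the central error-log table.
APPS = [{ id: 1, name: "Billing" }, { id: 2, name: "Reports" }]

ERROR_LOG = [
  { app_id: 1, severity: "ERROR", message: "timeout",    acknowledged: false },
  { app_id: 2, severity: "WARN",  message: "slow query", acknowledged: true  },
  { app_id: 1, severity: "FATAL", message: "crash",      acknowledged: false },
]

# The "unacknowledged events by severity" page boils down to a filter + sort.
SEVERITY_ORDER = { "FATAL" => 0, "ERROR" => 1, "WARN" => 2 }
pending = ERROR_LOG.reject { |e| e[:acknowledged] }
                   .sort_by { |e| SEVERITY_ORDER[e[:severity]] }

pending.each do |e|
  app = APPS.find { |a| a[:id] == e[:app_id] }   # the app_id foreign-key join
  puts "#{app[:name]}: #{e[:severity]} #{e[:message]}"
end
```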

    Answered on January 14, 2019.

    The rationale behind writing to the file system is that if an external infrastructure dependency (a network, database, or security issue) prevents you from writing remotely, you at least have a fallback: you can recover data from the web server’s hard disk (something akin to a black box in the airline industry).

    In fact, enterprise log managers like Splunk can be configured to scrape your local server log files (e.g. as written by log4net, the EntLib Logging Application Block, et al) and then centralize them in a searchable database, where data logged can be mined, graphed, shown on dashboards, etc.

    But from an operational perspective, where it is likely that you will have a farm of web servers, and assuming that both the local file system and remote database logging mechanisms are working, the 99% use case for actually trying to find anything in a log file will still be via the central database (ideally with a decent front end system to allow you to query, aggregate and even graph the log data).
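    The black-box fallback described above can be sketched as follows (the remote sink here is a stub; a real one would be a database or syslog connection):

```ruby
require "stringio"

# Stub standing in for a remote log sink (central database, syslog server, ...).
class RemoteSink
  def initialize(healthy:)
    @healthy = healthy
  end

  def write(line)
    raise IOError, "remote unreachable" unless @healthy
  end
end

# Try the remote sink first; fall back to the local "black box" file on failure.
def log_event(remote, local_io, line)
  remote.write(line)
  :remote
rescue IOError
  local_io.puts(line)
  :local
end

local_file = StringIO.new   # stands in for a file on the web server's disk
result = log_event(RemoteSink.new(healthy: false), local_file, "payment failed")
puts result   # => local
```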

    Original Answer

    If you have the database in place, I would recommend using this for audit records instead of the filesystem.

    Rationale:

    • typed and normalized classification of data (severity, action type, user, date ...)
    • it is easier to find audit data (SELECT ... FROM Audits WHERE ...) vs. grep
    • it is easier to clean up (e.g. DELETE FROM Audits WHERE Date < ...)
    • it is easier to back up
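    The clean-up point can be sketched with in-memory records (the table and column names are assumptions, not a real schema):

```ruby
require "date"

# Illustrative audit records; in SQL the clean-up below would be roughly
#   DELETE FROM Audits WHERE CreatedAt < :cutoff
audits = [
  { action: "login",  user: "alice", created_at: Date.new(2018, 1, 3) },
  { action: "delete", user: "bob",   created_at: Date.new(2018, 11, 20) },
]

cutoff = Date.new(2018, 6, 1)
audits.reject! { |a| a[:created_at] < cutoff }   # the retention clean-up step
puts audits.map { |a| a[:action] }.inspect       # => ["delete"]
```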

    The decision to use an existing db or a new one depends on your setup: if you have multiple applications (each with its own database) and want to log / audit all actions in all apps centrally, a centralized db might make sense.

    Since you say you want to audit user activity, it may make sense to audit in the same db as your users table / definition (if applicable).

    Answered on January 14, 2019.


    Audit logs must ensure full traceability of operations over a longer time for audit purposes, with the goal of fully justifying the content of your database.

    In some cases (e.g. financial applications) these logs may have to ensure compliance with legal requirements such as retention (in some countries for 10 years) or unalterability. As these logs have to justify the content of the db at application level, it’s common practice to store them in the db, where access can be controlled to avoid unauthorized alteration.

    Other logs, like monitoring logs or security logs, frequently have to cope with performance and volume constraints. These are generally written to a file because it is faster to write (no transaction-management overhead), easier to archive offline, and easier to integrate with external monitoring / SIEM tools.

    It should be noted that, while these kinds of logs can be used to demonstrate the reliability of the audit logs (e.g. no unauthorized access), they generally have shorter retention constraints (for example, between 6 months and 2 years for law-enforcement purposes for telecommunication logs), if any constraint at all.

    Answered on February 20, 2019.

