Here's my problem: I'm looking at someone's PostgreSQL-based database application for the first time and trying to find what is causing certain warnings/errors in the system's logfile. I don't know anything about the database schema. I don't know anything about the source code. But I need to track down the problem.
I can easily search the string contents of text-based code like PHP and Perl using the UNIX command 'grep'; even for compiled binaries I can use the UNIX commands 'find' and 'strings'.
My problem is that some of the text produced in the logfile comes from the database itself. Checking the database's own error logfile yields nothing useful, as there are no problems with the queries used by the application.
What I would like to do is exhaustively search all of the columns of all of the tables in the database for a string. Is this possible, and how?
Thanks in advance for any pointers. The environment in question is PostgreSQL 8.2, but it would be useful to know how to do this in other flavors of relational databases as well.
-
I am not familiar with PostgreSQL, but I would think that, like SQL Server, it has metadata tables/views that describe the schema of the database (for SQL Server 2005+, I'd point you to sys.tables and sys.columns). The idea would be to generate a series of ad-hoc queries based on the table schema, each one finding matches in a particular table/column combination and pumping the matches into a "log" table.
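PostgreSQL does have such a catalog: the standard information_schema views, which are available in 8.2. Below is a minimal, untested sketch of the idea in PL/pgSQL; the table name search_hits, the function name search_all_columns, and the restriction to the public schema and text-like column types are all my own placeholder assumptions, so adjust them to your database.

    -- Hypothetical "log" table to collect matches.
    CREATE TABLE search_hits (tbl text, col text, val text);

    -- Loop over every text-like column listed in information_schema and
    -- copy any matching values into search_hits.
    CREATE OR REPLACE FUNCTION search_all_columns(needle text) RETURNS void AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN
            SELECT table_name, column_name
            FROM information_schema.columns
            WHERE table_schema = 'public'
              AND data_type IN ('text', 'character varying', 'character')
        LOOP
            EXECUTE 'INSERT INTO search_hits '
                 || 'SELECT ' || quote_literal(r.table_name) || ', '
                 ||              quote_literal(r.column_name) || ', '
                 ||              quote_ident(r.column_name)
                 || ' FROM '  || quote_ident(r.table_name)
                 || ' WHERE ' || quote_ident(r.column_name)
                 || ' LIKE '  || quote_literal('%' || needle || '%');
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

    -- Usage: search everything, then inspect the hits.
    SELECT search_all_columns('text seen in the logfile');
    SELECT * FROM search_hits;

Note that information_schema.columns also lists views, so you may want to join against information_schema.tables and keep only BASE TABLE rows to avoid scanning the same data twice.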
-
I've used variants of this in the past.
-
It may not be optimal, but since I already know how to grep a text file, I would just convert the database to a text file and grep that. Converting the database to a text file in this case means dumping the data using pg_dump.
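For example (a sketch; the database name "mydb" and the search string are placeholders):

    # Dump only the data as plain SQL text, then grep it for the string.
    pg_dump --data-only mydb | grep 'string from the logfile'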
The quickest/easiest/most efficient way isn't always elegant...
Joshua Berry: Quick and dirty: I like it. This solution works well for those of us more familiar with regexes than with SQL queries and functions. Thanks!