
Resolving LogMiner failing to mine business-data operations

Published: 2020-12-14 03:54:27 | Category: Big Data | Source: compiled from the web

Problem Description:

There was a requirement to find out who updated my business data, so I used LogMiner to mine the archived log file, but I could not get any useful information (see the attachment).
There is a lot of information such as:
1.commit;
2.set transaction read write;
3.update "SYS"."OBJ$" set "OBJ#" = '189213',"DATAOBJ#" = '189213',"TYPE#" = '2',"CTIME" = TO_DATE('10-4月 -12','DD-MON-RR'),"MTIME" = TO_DATE('28-12月-12','DD-MON-RR'),"STIME" = TO_DATE('28-12月-12','DD-MON-RR'),"STATUS" = '1',"FLAGS" = '0',"OID$" = NULL,"SPARE1" = '6',"SPARE2" = '3' where "OBJ#" = '189213' and "DATAOBJ#" = '189213' and "TYPE#" = '2' and "CTIME" = TO_DATE('10-4月 -12','DD-MON-RR') and "MTIME" = TO_DATE('24-5月 -12','DD-MON-RR') and "STIME" = TO_DATE('24-5月 -12','DD-MON-RR') and "STATUS" = '1' and "FLAGS" = '0' and "OID$" IS NULL and "SPARE1" = '6' and "SPARE2" = '2' and ROWID = 'AAAAASAABAAAQUGAAk';

but none of it contains any information about the business data.

Analysis:
1. The commands I used:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE ('K:\arch\ARC17272_0694084928.001', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
set lines 200
SELECT USERNAME, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE USERNAME='';


2. Check whether minimal supplemental logging is enabled:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
The result is: NO


Supplemental logging should not noticeably affect database performance or CPU usage, as it only adds some extra data to the archived logs. Still, test LogMiner in your test environment before applying the change to the production database.

-- Enabling supplemental log data only appends some extra data to the log files. Log volume grows somewhat, but database performance and CPU consumption are not significantly affected.


Solution:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- After supplemental log data is enabled, only the log files generated from that point on can be mined to reveal DML records against business data.
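Putting the fix together, here is a minimal SQL*Plus sketch of the re-mining workflow. The archived log name and the SCOTT schema are placeholders, not values from the original problem:

```sql
-- Enable minimal supplemental logging (the fix); only logs generated
-- after this point will contain minable business DML.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Force a log switch so subsequent changes land in a fresh log file.
ALTER SYSTEM SWITCH LOGFILE;

-- After the suspect DML has occurred, mine the NEW archived log
-- (the file name below is hypothetical):
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('K:\arch\ARC17273_0694084928.001', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Filter on the application schema instead of USERNAME='' ;
-- SCOTT is a placeholder owner.
SELECT USERNAME, SQL_REDO, SQL_UNDO
FROM   V$LOGMNR_CONTENTS
WHERE  SEG_OWNER = 'SCOTT';

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```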



P.S. References:
Oracle8i Administrator's Guide, Release 8.1.6
Oracle8i Administrator's Guide, Release 8.1.7
Oracle9i Database Administrator's Guide, Release 2 (9.2)
Oracle® Database Utilities 10g Release 1 (10.1)
Oracle® Database Utilities 10g Release 2 (10.2)
Oracle® Database Utilities 11g Release 1 (11.1)
Oracle® Database Utilities 11g Release 2 (11.2)

Introduction:
=============

LogMiner runs in an Oracle instance with the database either mounted or
unmounted. LogMiner uses a dictionary file, which is a special file that
indicates the database that created it as well as the time the file was
created. The dictionary file is not required, but is recommended. Without a
dictionary file, the equivalent SQL statements will use Oracle internal object
IDs for the object name and present column values as hex data.

For example, instead of the SQL statement:

INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

LogMiner will display:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));

Create a dictionary file by mounting a database and then extracting dictionary
information into an external file.

You must create the dictionary file from the same database that generated the
log files you want to analyze. Once created, you can use the dictionary file
to analyze redo logs.

When creating the dictionary, specify the following:

* DICTIONARY_FILENAME to name the dictionary file.
* DICTIONARY_LOCATION to specify the location of the file.

LogMiner analyzes redo log files from any version 8.0.x and later Oracle
database that uses the same database character set and is running on the same
hardware as the analyzing instance.

Note: The LogMiner packages are owned by the SYS schema. Therefore, if you
are not connected as user SYS, you must include SYS in your call. For
example:

EXECUTE SYS.DBMS_LOGMNR_D.BUILD


To Create a Dictionary File on an Oracle8 Database:
===================================================

Although LogMiner only runs on databases of release 8.1 or higher, you can
use it to analyze redo logs from release 8.0 databases.

1. Use an O/S command to copy the dbmslmd.sql script, which is contained in the
$ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same
directory in the Oracle8 database.

For example, enter:

% cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql

Note: In 8.1.5 the script is dbmslogmnrd.sql.
In 8.1.6 the script is dbmslmd.sql.

2. Use SQL*Plus to mount and then open the database whose files you want to
analyze. For example, enter:

STARTUP

3. Execute the copied dbmslmd.sql script on the 8.0 database to create the
DBMS_LOGMNR_D package.

For example, enter: @dbmslmd.sql

4. Make sure to specify an existing directory that Oracle has permission
to write to from the PL/SQL procedure, by setting the initialization
parameter UTL_FILE_DIR in the init.ora.
For example, set the following to use /8.0/oracle/logs:

UTL_FILE_DIR = /8.0/oracle/logs

Be sure to shut down and restart the instance after adding UTL_FILE_DIR
to the init.ora.

If you do not set this parameter, the procedure will fail.

5. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
for the dictionary and a directory pathname for the file. This procedure
creates the dictionary file, which you should use to analyze log files.
For example, enter the following to create file dictionary.ora in
/8.0/oracle/logs:

(REMEMBER TO INCLUDE THE DASH '-' CONTINUATION CHARACTER AT THE END OF
EACH LINE WHEN ENTERING A MULTI-LINE PL/SQL COMMAND IN SQL*PLUS)

EXECUTE DBMS_LOGMNR_D.BUILD( -
DICTIONARY_FILENAME => 'dictionary.ora', -
DICTIONARY_LOCATION => '/8.0/oracle/logs');

After creating the dictionary file on the Oracle 8.0.x instance, the
dictionary file and any archived logs to be mined must be moved to the
server running the 8.1.x database on which LogMiner will run, if that
server is different from the one that generated the archived logs.


To Create a Dictionary File on an Oracle8i Database:
====================================================

1. Make sure to specify an existing directory that Oracle has permission
to write to from the PL/SQL procedure, by setting the initialization
parameter UTL_FILE_DIR in the init.ora.
For example, set the following to use /oracle/logs:

UTL_FILE_DIR = /oracle/logs

Be sure to shut down and restart the instance after adding UTL_FILE_DIR
to the init.ora.

If you do not set this parameter, the procedure will fail.

2. Use SQL*Plus to mount and then open the database whose files you want to
analyze. For example, enter:

STARTUP

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
for the dictionary and a directory pathname for the file. This procedure
creates the dictionary file. For example, enter the following to create file
dictionary.ora in /oracle/logs:

(REMEMBER TO INCLUDE THE DASH '-' CONTINUATION CHARACTER AT THE END OF
EACH LINE WHEN ENTERING A MULTI-LINE PL/SQL COMMAND IN SQL*PLUS)

EXECUTE DBMS_LOGMNR_D.BUILD( -
DICTIONARY_FILENAME => 'dictionary.ora', -
DICTIONARY_LOCATION => '/oracle/logs');



To Create a Dictionary on the Oracle Database (9i and later)
============================================================

In 9i and later releases, you can extract the dictionary to a flat file or build the dictionary into the redo logs.

1. Make sure to specify an existing directory that Oracle has permission
to write to from the PL/SQL procedure, by setting the initialization
parameter UTL_FILE_DIR in the init.ora.
For example, set the following to use /oracle/database:

UTL_FILE_DIR = /oracle/database

Be sure to shut down and restart the instance after adding UTL_FILE_DIR to the init.ora or spfile.

If you do not set this parameter, the procedure will fail.

2. Enter the following to create the file dictionary.ora in /oracle/database/:

Example:
-------------
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora',-
2 '/oracle/database/',-
3 OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);



If extracting the database dictionary information to the redo logs, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option and do not specify a filename or location.

Example:
-------------
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ( -
2 OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);


Please note that to extract a dictionary to the redo logs, the database must be open, in ARCHIVELOG mode, and archiving must be enabled.
Also, to make sure that the redo logs contain information that will provide the most value to you, you should enable at least minimal supplemental logging:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
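As a quick sanity check, V$DATABASE exposes one column per supplemental-logging level (these columns exist in 9i and later); a sketch of verifying the change:

```sql
-- Confirm that minimal supplemental logging is now enabled.
SELECT SUPPLEMENTAL_LOG_DATA_MIN,
       SUPPLEMENTAL_LOG_DATA_PK,
       SUPPLEMENTAL_LOG_DATA_UI
FROM   V$DATABASE;
-- After ALTER DATABASE ADD SUPPLEMENTAL LOG DATA,
-- SUPPLEMENTAL_LOG_DATA_MIN should read YES.
```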



Specifying Redo Logs for Analysis
=================================

Once you have created a dictionary file, you can begin analyzing redo logs.
Your first step is to specify the log files that you want to analyze using
the ADD_LOGFILE procedure. Use the following constants:

* NEW to create a new list.
* ADDFILE to add redo logs to a list.
* REMOVEFILE to remove redo logs from the list.

To Use LogMiner:

1. Use SQL*Plus to start an Oracle instance, with the database either mounted
or unmounted.

For example, enter:

STARTUP

2. Create a list of logs by specifying the NEW option when executing the
DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to
specify /oracle/logs/log1.f:

(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log1.f', -
OPTIONS => dbms_logmnr.NEW);

3. If desired, add more logs by specifying the ADDFILE option.

For example, enter the following to add /oracle/logs/log2.f:

(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log2.f', -
OPTIONS => dbms_logmnr.ADDFILE);

4. If desired, remove logs by specifying the REMOVEFILE option.

For example, enter the following to remove /oracle/logs/log2.f:

(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log2.f', -
OPTIONS => dbms_logmnr.REMOVEFILE);


Using LogMiner:
===============

Once you have created a dictionary file and specified which logs to analyze,
you can start LogMiner and begin your analysis. Use the following options to
narrow the range of your search at start time:


This option Specifies
=========== =========

STARTSCN The beginning of an SCN range.
ENDSCN The termination of an SCN range.
STARTTIME The beginning of a time interval.
ENDTIME The end of a time interval.
DICTFILENAME The name of the dictionary file.


Once you have started LogMiner, you can make use of the following data
dictionary views for analysis:

This view Displays information about
=================== ==================================================

V$LOGMNR_DICTIONARY The dictionary file in use.
V$LOGMNR_PARAMETERS Current parameter settings for LogMiner.
V$LOGMNR_LOGS Which redo log files are being analyzed.
V$LOGMNR_CONTENTS The contents of the redo log files being analyzed.
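For the who-changed-my-data question in the problem description, a query sketch against V$LOGMNR_CONTENTS; SEG_OWNER, TABLE_NAME, OPERATION, TIMESTAMP and SCN are real columns of that view, while APPUSER is a placeholder schema name:

```sql
-- List DML against one application schema, with who and when.
SELECT TIMESTAMP, USERNAME, OPERATION, TABLE_NAME, SQL_REDO
FROM   V$LOGMNR_CONTENTS
WHERE  SEG_OWNER = 'APPUSER'
AND    OPERATION IN ('INSERT', 'UPDATE', 'DELETE')
ORDER  BY SCN;
```

Filtering on SEG_OWNER (the owner of the changed segment) avoids the empty result you get from WHERE USERNAME='' and skips the SYS.OBJ$ housekeeping rows shown earlier.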


To Use LogMiner:
================

1. Issue the DBMS_LOGMNR.START_LOGMNR procedure to start the LogMiner utility.

For example, if using the online catalog as your dictionary source, issue:

(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

If using a dictionary file (e.g. /oracle/dictionary.ora), you would issue:

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/oracle/dictionary.ora');

Optionally, set the STARTTIME and ENDTIME parameters to filter data by time. Note that the procedure expects date values: use the TO_DATE function to specify date and time, as in this example:

(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/oracle/dictionary.ora', -
STARTTIME => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
ENDTIME => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));

Use the STARTSCN and ENDSCN parameters to filter data by SCN, as in this example:

(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/oracle/dictionary.ora', -
STARTSCN => 100, -
ENDSCN => 150);

2. View the output via the V$LOGMNR_CONTENTS view. LogMiner returns all rows
in SCN order, which is the same order applied in media recovery.

For example, the following query lists information about operations:

SELECT operation, sql_redo FROM v$logmnr_contents;

OPERATION SQL_REDO
--------- ----------------------------------------------------------
INTERNAL
INTERNAL
START     set transaction read write;
UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
COMMIT    commit;
START     set transaction read write;
UPDATE    update SYS.UNDO$ set NAME = 'RS0', SCNBAS =
COMMIT    commit;

11 rows selected.


Analyzing Archived Redo Log Files from Other Databases:
=======================================================

You can run LogMiner on an instance of a database while analyzing redo log
files from a different database. To analyze archived redo log files from
other databases, LogMiner must:

* Access a dictionary file that is both created from the same database as the redo log files and created with the same database character set.
* Run on the same hardware platform that generated the log files, although it does not need to be on the same system.
* Use redo log files that can be applied for recovery from Oracle version 8.0 and later.
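The whole workflow above can be condensed into one hedged SQL*Plus sketch; the paths and log file names repeat the illustrative values used earlier, and you would substitute DICT_FROM_ONLINE_CATALOG for the flat-file dictionary where appropriate:

```sql
-- 1. One-time setup: supplemental logging, then a flat-file dictionary.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
EXECUTE DBMS_LOGMNR_D.BUILD( -
DICTIONARY_FILENAME => 'dictionary.ora', -
DICTIONARY_LOCATION => '/oracle/logs');

-- 2. Register the logs to mine.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);

-- 3. Start the session, query the contents, then stop.
EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/oracle/logs/dictionary.ora');
SELECT operation, sql_redo FROM v$logmnr_contents;
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```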
