

Understanding and Using LogMiner, the Oracle 8i Log Analysis Tool


Oracle LogMiner is a genuinely useful analysis tool that Oracle has shipped with its products since release 8i. With it you can easily extract the concrete contents of Oracle redo log files (including archived log files); in particular, it can reconstruct all the DML statements (insert, update, delete, and so on) executed against the database, and it can also derive the SQL needed to roll those changes back. The tool is especially suited to debugging, auditing, or backing out a particular transaction.

  LogMiner is actually a set of PL/SQL packages and dynamic views (part of the Oracle 8i built-in packages). It ships as part of the Oracle database and is an entirely free tool in the 8i product. Compared with other built-in Oracle tools, however, it is somewhat awkward to use, mainly because it offers no graphical user interface (GUI). This article describes in detail how to install and use the tool.

  I. What LogMiner is for

  The log files hold all the data needed for database recovery; they record every change made to the database, that is, all the DML statements executed against it.

  Before Oracle 8i, Oracle offered no tool to help a database administrator read and interpret the contents of the redo log files. When a problem arose, about all an ordinary administrator could do was bundle up all the log files, send them to Oracle Support, and wait quietly for Oracle's final answer. From 8i onwards, Oracle provides just such a powerful tool: LogMiner.

  LogMiner can analyze both online and offline (archived) log files, and it can analyze the redo logs of its own database as well as redo logs produced by other databases.

  In summary, the main uses of LogMiner are:

   1. Tracking database changes: changes can be tracked offline, without affecting the performance of the live system.

   2. Backing out database changes: specific changed data can be rolled back, reducing the need to perform point-in-time recovery.

   3. Tuning and capacity planning: growth patterns can be determined by analyzing the data in the log files.

  II. Installing LogMiner

  To install LogMiner, you must first run the following two scripts:

   1. $oracle_home/rdbms/admin/dbmslm.sql

   2. $oracle_home/rdbms/admin/dbmslmd.sql

  Both scripts must be run as the sys user. The first creates the dbms_logmnr package, which performs the log analysis; the second creates the dbms_logmnr_d package, which is used to create the data dictionary file.
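
  For example, from SQL*Plus (a minimal sketch; substitute your actual Oracle home for $oracle_home on platforms where the variable is not expanded):

sql> connect sys
sql> @$oracle_home/rdbms/admin/dbmslm.sql
sql> @$oracle_home/rdbms/admin/dbmslmd.sql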


  III. Using LogMiner

  This section walks through using LogMiner step by step.

  1. Creating the data dictionary file

  As noted above, LogMiner actually consists of two new built-in PL/SQL packages (dbms_logmnr and dbms_logmnr_d) and four v$ dynamic performance views (created when LogMiner is started with the dbms_logmnr.start_logmnr procedure). Before using LogMiner to analyze redo log files, you can use the dbms_logmnr_d package to export the data dictionary as a text file. This dictionary file is optional, but without it, any parts of the statements LogMiner reconstructs that refer to the data dictionary (table names, column names and so on), as well as the values, appear in internal hexadecimal form and cannot be read directly. For example, the SQL statement:

insert into dm_dj_swry (rydm, rymc) values (00005, '張三');

  would be rendered by LogMiner as something like:

insert into object#308(col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65'));

  The purpose of the dictionary file is therefore to let LogMiner show real names for references into the internal data dictionary, rather than internal hexadecimal. The dictionary file is a text file, created with the dbms_logmnr_d package. If the tables in the database being analyzed change, so that its data dictionary changes, the dictionary file must be regenerated. Similarly, when analyzing the redo logs of a different database, the dictionary file must be regenerated from the database being analyzed.

  First, in the init.ora initialization parameter file, specify the location of the dictionary file by adding the utl_file_dir parameter, whose value is the server directory where the dictionary file will be placed. For example:

utl_file_dir = (e:/oracle/logs)

  Restart the database so the new parameter takes effect, then create the dictionary file:

sql> connect sys
sql> execute dbms_logmnr_d.build(
dictionary_filename => 'v816dict.ora',
dictionary_location => 'e:/oracle/logs');
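
  To verify the setup, you can confirm the parameter value and list the target directory (a small sketch; show parameter and host are SQL*Plus commands, and the path is the illustrative one used above):

sql> show parameter utl_file_dir
sql> host dir e:\oracle\logs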

 2. Building the list of log files to analyze

  Oracle redo logs come in two kinds, online logs and offline (archived) log files; building the analysis list for each kind is shown below.

  (1) Analyzing online redo log files

  a. Create the list

sql> execute dbms_logmnr.add_logfile(
logfilename=>'e:/oracle/oradata/sxf/redo01.log',
options=>dbms_logmnr.new);

  b. Add further log files to the list

sql> execute dbms_logmnr.add_logfile(
logfilename=>'e:/oracle/oradata/sxf/redo02.log',
options=>dbms_logmnr.addfile);

  (2) Analyzing archived log files

  a. Create the list

sql> execute dbms_logmnr.add_logfile(
logfilename=>'e:/oracle/oradata/sxf/archive/arcarc09108.001',
options=>dbms_logmnr.new);

  b. Add further log files to the list

sql> execute dbms_logmnr.add_logfile(
logfilename=>'e:/oracle/oradata/sxf/archive/arcarc09109.001',
options=>dbms_logmnr.addfile);

  How many log files you put on the analysis list is entirely up to you, but it is recommended to add only one file at a time, and to add the next file only after the current one has been analyzed.

  Symmetrically with adding files, the dbms_logmnr.removefile option removes a log file from the list. The following example removes the file e:/oracle/oradata/sxf/redo02.log added above.

sql> execute dbms_logmnr.add_logfile(
logfilename=>'e:/oracle/oradata/sxf/redo02.log',
options=>dbms_logmnr.removefile);

  With the list of log files to analyze in place, the analysis itself can begin.

  3. Analyzing the logs with LogMiner

  (1) Without restrictions

sql> execute dbms_logmnr.start_logmnr(
dictfilename=>'e:/oracle/logs/v816dict.ora');

  (2) With restrictions

  By setting various parameters of the dbms_logmnr.start_logmnr procedure (see Table 1 for their meanings), you can narrow the range of the log files to analyze. Setting the start-time and end-time parameters restricts the analysis to a given time window. The following example analyzes only the logs of 18 September 2001:

sql> execute dbms_logmnr.start_logmnr(
dictfilename => 'e:/oracle/logs/v816dict.ora',
starttime => to_date('2001-9-18 00:00:00','yyyy-mm-dd hh24:mi:ss'),
endtime => to_date('2001-9-18 23:59:59','yyyy-mm-dd hh24:mi:ss'));

  You can also restrict the range of logs to analyze by starting and ending SCN:

sql> execute dbms_logmnr.start_logmnr(
dictfilename => 'e:/oracle/logs/v816dict.ora',
startscn => 20,
endscn => 50);

  Table 1: parameters of the dbms_logmnr.start_logmnr procedure

Parameter     Type            Default     Meaning
startscn      number          0           analyze only the portion of the redo with scn >= startscn
endscn        number          0           analyze only the portion of the redo with scn <= endscn
starttime     date            1988-01-01  analyze only the portion of the redo with timestamp >= starttime
endtime       date            2988-01-01  analyze only the portion of the redo with timestamp <= endtime
dictfilename  varchar2        (none)      dictionary file containing a snapshot of the database
                                          catalog; with it the analysis output is readable text
                                          rather than internal hexadecimal
options       binary_integer  0           debug flag, rarely used in practice

  4. Examining the results (v$logmnr_contents)

  At this point the contents of the redo log files have been analyzed. The dynamic performance view v$logmnr_contents holds all the information LogMiner produced.

select sql_redo from v$logmnr_contents;

  If you only want to know what a particular user did to a particular table, the following query supplies it; this one retrieves everything the user db_zgxt did to the table sb_djjl (the view exposes the table name through its seg_name column, and names are stored in uppercase):

sql> select sql_redo from v$logmnr_contents where username='DB_ZGXT' and seg_name='SB_DJJL';

  It must be stressed that the results in v$logmnr_contents exist only during the lifetime of the session that ran the dbms_logmnr.start_logmnr procedure. This is because all LogMiner storage is in PGA memory: no other process can see it, and when the process ends the results disappear with it.

  Finally, the dbms_logmnr.end_logmnr procedure ends the analysis; the PGA memory area is then cleared and the results cease to exist.
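
  The closing call takes no arguments:

sql> execute dbms_logmnr.end_logmnr;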

  IV. Other points to note

  LogMiner can be used to analyze redo log files produced by other database instances, not just the redo logs of the instance on which LogMiner itself is installed. When analyzing another database instance, note the following:

  1. LogMiner must use a dictionary file produced by the database being analyzed, not one produced by the database running LogMiner; in addition, the character set of the LogMiner database must match that of the analyzed database.

  2. The platform of the analyzed database must be the same as the platform where LogMiner runs: if the files to be analyzed were produced by Oracle 8i on a Unix platform, LogMiner must also run on an Oracle instance on a Unix platform, not, say, on Microsoft NT. The hardware need not be identical, however.

  3. LogMiner can only analyze logs from Oracle 8 and later products; for anything earlier the tool cannot help.

  V. Conclusion

  For a database administrator (DBA), LogMiner is a very powerful tool and one that sees frequent use in day-to-day work; with it, a great deal of information about database activity can be obtained. One of its most important uses is recovering a specific database change without restoring the whole database. The tool can also be used to monitor or audit user activity, for example to see who modified which data and what state that data was in before the change. It can analyze any redo log file produced by Oracle 8 and later. Another very important feature is its ability to analyze the log files of other databases. In short, it is a highly effective tool for the DBA, and a deep understanding and firm command of it will help greatly in everyday work.

The translation above may contain errors, so the original English note is reproduced below for reference.
purpose
  this paper details the mechanics of what logminer does, as well as detailing
  the commands and environment it uses.

scope & application
  for dbas requiring further information about logminer.

  the ability to provide a readable interface to the redo logs has been asked 
  for by customers for a long time. the alter system dump logfile interface 
  has been around for a long time, though its usefulness outside support is 
  limited. there have been a number of third party products, e.g. bmc's patrol
  db-logmaster (sql*trax as was), which provide some functionality in this 
  area. with oracle release 8.1 there is a facility in the oracle kernel to do
  the same. logminer allows the dba to audit changes to data and to perform 
  analysis on the redo to determine trends, aid in capacity planning, 
  point-in-time recovery etc. 
  
related documents
 [note:117580.1]  ora-356, ora-353, & ora-334 errors when mining logs with
                  different db_block_size
oracle8i  - 8.1 logminer:
=========================
 
1. what does logminer do?
=========================

  logminer can be used against online or archived logs from either the 
  'current' database or a 'foreign' database. the reason for this is that it 
  uses an external dictionary file to access meta-data, rather than the 
  'current' data dictionary.

  it is important that this dictionary file is kept in step with the database 
  which is being analyzed. if the dictionary used is out of step from the redo
  then analysis will be considerably more difficult. building the external 
  dictionary will be discussed in detail in section 3.
 
  logminer scans the log/logs it is interested in, and generates, using the 
  dictionary file meta-data, a set of sql statements which would have the same
  effect on the database as applying the corresponding redo record.

  logminer prints out the 'final' sql that would have gone against the 
  database. for example:

      insert into table x values ( 5 );
      update table x set column=newvalue where rowid='<>' 
      delete from table x where rowid='<>' and column=value and column=value

  we do not actually see the sql that was issued, rather an executable sql 
  statement that would have the same effect. since it is also stored in the 
  same redo record, we also generate the undo column which would be necessary 
  to roll this change out.

  for sql which rolls back, no undo sql is generated, and the rollback flag is
  set. an insert followed by a rollback therefore looks like: 

      redo                              undo              rollback 

      insert sql                        delete sql        0
      delete sql                        <null>            1
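
  for example, the rollback column lets you pick out the operations that 
  rolled a change back (a sketch; the xidusn/xidslt/xidsqn transaction id 
  columns appear in the view description in section 4):

      select xidusn, xidslt, xidsqn, operation, sql_redo
      from v$logmnr_contents
      where rollback = 1;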

  because it operates against the physical redo records, multirow operations
  are not recorded in the same manner e.g. delete from emp where deptno=30
  might delete 100 rows in the sales department in a single statement, the 
  corresponding logminer output would show one row of output per row in the 
  database.
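
  grouping on the transaction id columns shows how many rows of output a 
  single multirow statement produced (a sketch; the 'EMP' segment and the 
  delete filter are purely illustrative):

      select xidusn, xidslt, xidsqn, count(*) row_changes
      from v$logmnr_contents
      where seg_name = 'EMP' and operation = 'DELETE'
      group by xidusn, xidslt, xidsqn;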


2. what it does not do
======================

  1. 'trace' application sql - use sql_trace/10046

     since logminer only generates low-level sql, not what was issued, you 
     cannot use logminer to see exactly what was being done based on the sql. 
     what you can see, is what user changed what data at what time.

  2. 'replicate' an application  

     logminer does not cover everything. in particular, ddl is not supported 
     (the recursive insert into tab$ etc. is visible, however the create 
     table statement itself is not). 

  3. access data dictionary sql in a visible form

     especially update user$ set password=<newpassword>.


  other known current limitations
  ===============================

  logminer cannot cope with objects.
  logminer cannot cope with chained/migrated rows.
  logminer produces fairly unreadable output if there is no record of the 
  table in the dictionary file. see below for output.
  
  the database where the analysis is being performed must have a block size 
  of at least equal to that of the originating database. see [note:117580.1].
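
  a quick way to compare block sizes is to run the following on each 
  database (a sketch):

      select value from v$parameter where name = 'db_block_size';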
  


3. functionality
================

  the logminer feature is made up of three procedures in the logminer 
  (dbms_logmnr) package, and one in the dictionary (dbms_logmnr_d). 

  these are built by the following scripts: (run by catproc)
 
      $oracle_home/rdbms/admin/dbmslogmnrd.sql
      $oracle_home/rdbms/admin/dbmslogmnr.sql
      $oracle_home/rdbms/admin/prvtlogmnr.plb

  since 8.1.6:
 
      $oracle_home/rdbms/admin/dbmslmd.sql
      $oracle_home/rdbms/admin/dbmslm.sql
      $oracle_home/rdbms/admin/prvtlm.plb


  1. dbms_logmnr_d.build 

     this procedure builds the dictionary file used by the main logminer
     package to resolve object names, and column datatypes. it should be 
     generated relatively frequently, since otherwise newer objects will not 
     be recorded.

     it is possible to generate a dictionary file from an 8.0 database and 
     use it to analyze oracle 8.0 redo logs. in order to do this run 
     "dbmslogmnrd.sql" against the 8.0 database, then follow the procedure as 
     below. all analysis of the logfiles will have to take place while 
     connected to an 8.1 database since dbms_logmnr cannot operate against 
     oracle 8.0 because it uses trusted callouts.

     any redo relating to tables which are not included in the dictionary 
     file are dumped raw. example: if logminer cannot resolve the table and 
     column references, then the following is output: (insert statement)

         insert into unknown.objn:xxxx(col[x],....) values 
            ( hextoraw('xxxxxx'), hextoraw('xxxxx')......)

     parameters
     ==========

     1. the name of the dictionary file you want to produce.
     2. the name of the directory where you want the file produced. 

     the directory must be writeable by the server i.e. included in
     utl_file_dir path.  
  
     example
     =======

     begin
        dbms_logmnr_d.build(
           dictionary_filename => 'miner_dictionary.dic',
           dictionary_location => '/export/home/sme81/aholland/testcases/logminer'
        );
     end;
     /

  the dbms_logmnr package actually performs the redo analysis.  

  2. dbms_logmnr.add_logfile 

     this procedure registers the logfiles to be analyzed in this session. it
     must be called once for each logfile. this populates the fixed table
     x$logmnr_logs (v$logmnr_logs) with a row corresponding to the logfile.

     parameters 
     ===========

     1. the logfile to be analyzed.
     2. option 
        dbms_logmnr.new (session) 
           first file to be put into pga memory; this initialises the 
           v$logmnr_logs table.
        dbms_logmnr.addfile 
           adds another logfile to the v$logmnr_logs pga memory. has the 
           same effect as new if there are no rows there presently.
        dbms_logmnr.removefile 
           removes a row from v$logmnr_logs.

     example 
     =======

     include all my online logs for analysis.........

     begin
        dbms_logmnr.add_logfile(
           '/export/home/sme81/aholland/database/files/redo03.log',
                              dbms_logmnr.new );
        dbms_logmnr.add_logfile(
           '/export/home/sme81/aholland/database/files/redo02.log',
                              dbms_logmnr.addfile );
        dbms_logmnr.add_logfile(
           '/export/home/sme81/aholland/database/files/redo01.log',
                              dbms_logmnr.addfile );
     end;
     /

     a full path should be supplied, though an environment variable 
     is accepted. note that the variable is not expanded in v$logmnr_logs. 


  3. dbms_logmnr.start_logmnr;

     this package populates v$logmnr_dictionary, v$logmnr_parameters, 
     and v$logmnr_contents.

     parameters
     ==========

     1.  startscn      default 0
     2.  endscn        default 0
     3.  starttime     default '01-jan-1988'
     4.  endtime       default '01-jan-2988'
     5.  dictfilename  default ''
     6.  options       default 0  (debug flag - uninvestigated as yet)

     a point to note here is that there are comparisons made between the 
     scns, the times entered, and the range of values in the file. if the scn 
     range or the start/end time range is not wholly contained in this log, 
     then the start_logmnr command will fail with the general error: 
         ora-01280 fatal logminer error.
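
     one way to avoid this is to inspect the scn and time ranges of the 
     registered logfiles before starting the analysis (a sketch; low_scn, 
     next_scn, low_time and high_time are assumed here to be columns of 
     v$logmnr_logs - check the view definition in your release):

         select filename, low_scn, next_scn, low_time, high_time
         from v$logmnr_logs;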

  4. dbms_logmnr.end_logmnr; 

     this is called with no parameters. 

     /* this is very important for support */

     this procedure must be called prior to exiting the session that was 
     performing the analysis. this is because of the way the pga is used to 
     store the dictionary definitions from the dictionary file, and the 
     v$logmnr_contents output. 
     if you do not call end_logmnr, you will silently get ora-00600 [723] ...
     on logoff. this oeri is triggered because the pga is bigger at logoff 
     than it was at logon, which is considered a space leak. the main problem 
     from a support perspective is that it is silent, i.e. not signalled back 
     to the user screen, because by then they have logged off. 

     the way to spot logminer leaks is that the trace file produced by the 
     oeri 723 will have a pga heap dumped with many chunks of type 'freeable'
     and a description of "krvd:alh".

4. output 
=========

  effectively, the output from logminer is the contents of v$logmnr_contents.
  the output is only visible during the life of the session which runs 
  start_logmnr. this is because all the logminer memory is pga memory, so it 
  is neither visible to other sessions, nor is it persistent. as the session 
  logs off, either dbms_logmnr.end_logmnr is run to clear out the pga, or an 
  oeri 723 is signalled as described above. 

  typically users are going to want to output sql_redo based on queries by 
  timestamp, seg_name or row_id. 
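
  for example, a time-window query might look like the following (a sketch; 
  the segment name and date range are illustrative):

      select timestamp, username, sql_redo
      from v$logmnr_contents
      where seg_name = 'EMP'
      and timestamp between to_date('2001-09-18','yyyy-mm-dd')
                        and to_date('2001-09-19','yyyy-mm-dd');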


  v$logmnr_contents
  name                            null?    type
  ------------------------------- -------- ----
  scn                                      number
  timestamp                                date
  thread#                                  number
  log_id                                   number
  xidusn                                   number
  xidslt                                   number
  xidsqn                                   number
  rbasqn                                   number
  rbablk                                   number
  rbabyte                                  number
  ubafil                                   number
  ubablk                                   number
  ubarec                                   number
  ubasqn                                   number
  abs_file#                                number
  rel_file#                                number
  data_blk#                                number
  data_obj#                                number
  data_objd#                               number
  seg_owner                                varchar2(32)
  seg_name                                 varchar2(32)
  seg_type                                 varchar2(32)
  table_space                              varchar2(32)
  row_id                                   varchar2(19)
  session#                                 number
  serial#                                  number
  username                                 varchar2(32)
  rollback                                 number
  operation                                varchar2(32)
  sql_redo                                 varchar2(4000)
  sql_undo                                 varchar2(4000)
  rs_id                                    varchar2(32)
  ssn                                      number
  csf                                      number
  info                                     varchar2(32)
  status                                   number
  ph1_name                                 varchar2(32)
  ph1_redo                                 varchar2(4000)
  ph1_undo                                 varchar2(4000)
  ph2_name                                 varchar2(32)
  ph2_redo                                 varchar2(4000)
  ph2_undo                                 varchar2(4000)
  ph3_name                                 varchar2(32)
  ph3_redo                                 varchar2(4000)
  ph3_undo                                 varchar2(4000)
  ph4_name                                 varchar2(32)
  ph4_redo                                 varchar2(4000)
  ph4_undo                                 varchar2(4000)
  ph5_name                                 varchar2(32)
  ph5_redo                                 varchar2(4000)
  ph5_undo                                 varchar2(4000)

  sql> set heading off
  sql> select scn, username, sql_undo from v$logmnr_contents
          where seg_name = 'EMP';

  12134756        scott           insert (...) into emp;
  12156488        scott           delete from emp where empno = ...
  12849455        scott           update emp set mgr =

  this will return the results of an sql statement without the column
  headings.  the columns that you are really going to want to query are the
  "sql_undo" and "sql_redo" values because they give the transaction details 
  and syntax.


5. placeholders
===============

  in order to allow users to be able to query directly on specific data 
  values, there are up to five placeholders included at the end of 
  v$logmnr_contents. when enabled, a user can query on the specific before and
  after values of a specific field, rather than a %like% query against the 
  sql_undo/redo fields. this is implemented via an external file called 
  "logmnr.opt". (see the supplied packages manual entry on dbms_logmnr for 
  further details.) the file must exist in the same directory as the 
  dictionary file used, and contains the prototype mappings of the phx fields 
  to the fields in the table being analyzed.

     example entry
     =============
     colmap =  scott emp ( empno, 1, ename, 2, sal, 3 ); 

  in the above example, when a redo record is encountered for the scott.emp
  table, the full statement redo and undo information populates the sql_redo 
  and sql_undo columns respectively; however the ph3_name, ph3_redo and 
  ph3_undo columns will also be populated with 'sal', <newvalue> and 
  <oldvalue> respectively, which means that the analyst can query in the form:

      select * from v$logmnr_contents 
      where seg_name ='emp'
      and ph3_name='sal'
      and ph3_redo=1000000;

  the returned ph3_undo column would return the value prior to the update. 
  this enables much more efficient queries to be run against the 
  v$logmnr_contents view, and if, for instance, a ctas was issued to store a 
  physical copy, the column can be indexed.
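
  a minimal sketch of that approach (the table and index names are 
  hypothetical, and the ctas must run in the session that ran start_logmnr, 
  since the view contents vanish at logoff):

      create table logmnr_snapshot as
         select * from v$logmnr_contents;

      create index logmnr_snapshot_ph3
         on logmnr_snapshot (ph3_name, ph3_redo);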
