
Thursday, January 21, 2016

Calling a web service from the database using PL/SQL


Part-1: Compile the procedure below and execute it as required.

CREATE OR REPLACE PROCEDURE p_soap_request(p_username IN VARCHAR2, p_password IN VARCHAR2, p_proxy IN VARCHAR2) 
 IS 
  soap_request VARCHAR2(30000); 
  soap_respond CLOB; 
  http_req   utl_http.req; 
  http_resp   utl_http.resp; 
  resp     XMLType; 
  soap_err   exception; 
  v_code    VARCHAR2(200); 
  v_msg     VARCHAR2(1800); 
  v_len     number; 
  v_txt     Varchar2(32767); 
 BEGIN 
   UTL_HTTP.SET_PROXY(p_proxy); 
   -- Define the SOAP request according to the definition of the web service being called 
   soap_request:= '<?xml version = "1.0" encoding = "UTF-8"?>'|| 
           '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">'|| 
           ' <SOAP-ENV:Body>'|| 
           '  <m:DownloadRequest xmlns:m="http://www.website.net/messages/GetDetails">'|| 
           '   <m:UserName>'||p_username||'</m:UserName>'|| 
           '   <m:Password>'||p_password||'</m:Password>'|| 
           '  </m:DownloadRequest>'|| 
           ' </SOAP-ENV:Body>'|| 
           '</SOAP-ENV:Envelope>'; 
   http_req:= utl_http.begin_request 
        ( 'http://www.website.net/webservices/GetDetailsService.asmx' 
        , 'POST' 
        , 'HTTP/1.1'); 
   utl_http.set_header(http_req, 'Content-Type', 'text/xml'); 
   utl_http.set_header(http_req, 'Content-Length', length(soap_request)); 
   utl_http.set_header(http_req, 'Download', ''); -- header requirements of particular web service 
   utl_http.write_text(http_req, soap_request); 
   http_resp:= utl_http.get_response(http_req); 
   utl_http.get_header_by_name(http_resp, 'Content-Length', v_len, 1); -- Obtain the length of the response 
   FOR i in 1..CEIL(v_len/32767) -- obtain response in 32K blocks just in case it is greater than 32K 
   LOOP 
     utl_http.read_text(http_resp, v_txt, case when i < CEIL(v_len/32767) then 32767 else mod(v_len,32767) end); 
     soap_respond := soap_respond || v_txt; -- build up CLOB 
   END LOOP; 
   utl_http.end_response(http_resp); 
   resp:= XMLType.createXML(soap_respond); -- Convert CLOB to XMLTYPE 
 Exception 
   when others then 
     -- Do not silently swallow errors: report and re-raise so failures are visible 
     dbms_output.put_line('SOAP request failed: ' || SQLERRM); 
     raise; 
 END; 


Execute the above procedure as below:

Begin  
   p_soap_request(p_username =>'' ,p_password =>'' ,p_proxy =>'' );  
End;


Part-2: Parsing the XML returned by the web service call (this assumes you know the expected XML structure).

Declare  
  response_env   varchar2(32767);  
  v_xml       XMLTYPE;  
  l_user_first_name varchar2(1000);  
  l_user_last_name varchar2(100);  
  l_error_value   varchar2(100);  
  ns        varchar2(200) := 'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:axis2ns1="http://www.test.com/services"';  
 Begin  
  response_env := '<?xml version="1.0" encoding="utf-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">  
  <soapenv:Body>  
  <UserInfoObjResponse xmlns:axis2ns2="http://www.test.com/services">  
  <axis2ns2:FirstName >First Name Output</axis2ns2:FirstName>  
  <axis2ns2:LastName>Last Name Output</axis2ns2:LastName>  
  <axis2ns2:ErrorDescription/>  
  </UserInfoObjResponse>  
  </soapenv:Body>  
  </soapenv:Envelope>';  
  v_xml := XMLTYPE(response_env);  
  select EXTRACTVALUE(v_xml,'/soapenv:Envelope/soapenv:Body/UserInfoObjResponse/axis2ns1:FirstName',ns),  
      EXTRACTVALUE(v_xml,'/soapenv:Envelope/soapenv:Body/UserInfoObjResponse/axis2ns1:LastName',ns),  
      EXTRACTVALUE(v_xml,'/soapenv:Envelope/soapenv:Body/UserInfoObjResponse/axis2ns1:ErrorDescription',ns)  
   into l_user_first_name, l_user_last_name, l_error_value  
   from dual;  
  dbms_output.put_line('l_user_first_name: ' || l_user_first_name);  
  dbms_output.put_line('l_user_last_name : ' || l_user_last_name);  
 End;  


Wednesday, January 20, 2016

Convert Excel files to CSV in batch mode by using a DOS command


Using the scripts below, you can silently convert Excel (xls/xlsx) files to CSV in batch mode. Follow the steps below to convert your files.

Step-01: Copy the VBScript below into a text file and save it as “ExcelToCSV.vbs”.

if WScript.Arguments.Count < 2 Then
    WScript.Echo "Please specify the source and the destination files. Usage: ExcelToCsv <xls/xlsx source file> <csv destination file>"
    Wscript.Quit
End If

csv_format = 6

Set objFSO = CreateObject("Scripting.FileSystemObject")

src_file = objFSO.GetAbsolutePathName(Wscript.Arguments.Item(0))
dest_file = objFSO.GetAbsolutePathName(WScript.Arguments.Item(1))

Dim oExcel
Set oExcel = CreateObject("Excel.Application")
oExcel.DisplayAlerts = False ' suppress prompts (e.g. overwrite warnings) so the run stays silent

Dim oBook
Set oBook = oExcel.Workbooks.Open(src_file)

' Save the workbook in CSV format (constant 6 = xlCSV)
oBook.SaveAs dest_file, csv_format

oBook.Close False ' close without saving changes to the source workbook
oExcel.Quit

Step-02: Copy the script below into a text file and save it as “ExcelToCsv.bat”.

FOR /f "delims=" %%i IN ('DIR *.xls* /b') DO cscript //nologo ExcelToCSV.vbs "%%i" "%%i.csv"

Step-03: Now copy the “ExcelToCSV.vbs” and “ExcelToCsv.bat” files into the folder where your Excel files are stored.

Step-04: Double-click the ExcelToCsv.bat file and wait for the conversion to complete.


Thank You.

Tuesday, January 19, 2016

Writing efficient SQL Statements as part of SQL Tuning



Developing Efficient SQL Statements: Developing efficient SQL statements is one of the most important parts of SQL tuning. It is far easier to optimize SQL during development than after a statement has reached production and begun to push the system beyond its capacity. The main activities are:

1.       Verifying Optimizer Statistics
2.       Reviewing the Execution Plan
3.       Restructuring the SQL Statements
4.       Restructuring the Indexes
5.       Modifying or Disabling Triggers and Constraints
6.       Restructuring the Data
7.       Maintaining Execution Plans Over Time
8.       Visiting Data as Few Times as Possible

Verifying Optimizer Statistics
The query optimizer uses statistics gathered on tables and indexes when determining the optimal execution plan. If these statistics have not been gathered, or if the statistics are no longer representative of the data stored within the database, then the optimizer does not have sufficient information to generate the best plan.
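
For example, statistics can be refreshed with DBMS_STATS. This is only a minimal sketch; the HR schema and EMPLOYEES table are placeholders for your own objects:

BEGIN
  -- Gather table, column, and index statistics for one table
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'HR',          -- placeholder schema
    tabname          => 'EMPLOYEES',   -- placeholder table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);         -- also gather statistics on the table's indexes
END;
/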

Reviewing the Execution Plan
When tuning or writing a SQL statement in an OLTP environment, the goal is to drive from the table that has the most selective filter, so that fewer rows are passed to the next step. If the next step is a join, then fewer rows are joined. Check whether the access paths are optimal. While examining the optimizer execution plan, keep the following in mind (a quick EXPLAIN PLAN example follows this list):
1.       The plan is such that the driving table has the best filter.
2.       The join order in each step means that the fewest number of rows are being returned to the next step (that is, the join order should reflect, where possible, going to the best not-yet-used filters).
3.       The join method is appropriate for the number of rows being returned. For example, nested loop joins through indexes may not be optimal when many rows are being returned.
4.       Views are used efficiently. Look at the SELECT list to see whether access to the view is necessary.
5.       There are no unintentional Cartesian products (even with small tables).
6.       Each table is being accessed efficiently.
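
A quick way to review a plan is EXPLAIN PLAN followed by DBMS_XPLAN.DISPLAY; the tables and filter below are only illustrative:

EXPLAIN PLAN FOR
  SELECT e.last_name, d.department_name
    FROM employees e
    JOIN departments d ON e.department_id = d.department_id
   WHERE e.job_id = 'CLERK';   -- ideally the most selective filter drives the plan

-- Display the plan just explained: access paths, join order, and join methods
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);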

Restructuring the SQL Statements
1.       Compose Predicates Using AND and =: To improve SQL efficiency, use equijoins whenever possible. Statements that perform equijoins on untransformed column values are the easiest to tune.
2.       Avoid Transformed Columns in the WHERE Clause:
 Use untransformed column values. For example, use:
      WHERE a.order_no = b.order_no
Rather than:
      WHERE TO_NUMBER (SUBSTR(a.order_no, INSTR(b.order_no, '.') - 1))
          = TO_NUMBER (SUBSTR(b.order_no, INSTR(b.order_no, '.') - 1))
 
Do not use SQL functions in predicate clauses or WHERE clauses. Any expression using a column, such as a function having the column as its argument, causes the optimizer to ignore the possibility of using an index on that column, even a unique index, unless there is a function-based index defined that the database can use.
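
If the transformation cannot be avoided, one option is a function-based index on the same expression. The index, table, and column names below are hypothetical:

-- Without this index, a predicate such as WHERE UPPER(last_name) = 'SMITH'
-- cannot use an ordinary index on LAST_NAME.
CREATE INDEX emp_upper_name_idx ON employees (UPPER(last_name));

SELECT employee_id
  FROM employees
 WHERE UPPER(last_name) = 'SMITH';   -- can now use the function-based index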

3.       Write Separate SQL Statements for Specific Tasks: SQL is not a procedural language. Using one piece of SQL to do many different things usually results in a less-than-optimal result for each task. If you want SQL to accomplish different things, then write various statements, rather than writing one statement to do different things depending on the parameters you give it.
4.       Controlling the Access Path and Join Order with Hints: You can influence the optimizer's choices by setting the optimizer approach and goal, and by gathering representative statistics for the query optimizer. Sometimes, the application designer, who has more information about a particular application's data than is available to the optimizer, can choose a more effective way to execute a SQL statement. You can use hints in SQL statements to instruct the optimizer about how the statement should be executed.

Hints, such as /*+FULL */ control access paths. For example:

SELECT /*+ FULL(e) */ e.last_name
  FROM employees e
 WHERE e.job_id = 'CLERK';
 
5.       Use Caution When Managing Views and When Joining Complex Views
6.       Use Caution When Performing Outer Joins to Views
7.       Store Intermediate Results: Storing intermediate results in staging tables can improve application performance; a brief sketch follows this list.
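
One common form of staging table is a global temporary table that is filled once and then reused by later queries. This sketch assumes a hypothetical ORDERS table:

-- Intermediate result kept for the session and visible only to it
CREATE GLOBAL TEMPORARY TABLE order_totals_stg
  ON COMMIT PRESERVE ROWS
AS
SELECT customer_id, SUM(order_total) AS total_amount
  FROM orders
 GROUP BY customer_id;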

Restructuring the Indexes:
This can involve the following:
1.       Remove nonselective indexes to speed the DML.
2.       Index performance-critical access paths.
3.       Consider reordering columns in existing concatenated indexes.
4.       Add columns to the index to improve selectivity.

Modifying or Disabling Triggers and Constraints
Using triggers consumes system resources. If you use too many triggers, then you can find that performance is adversely affected and you might need to modify or disable them.
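
For example, a trigger or constraint can be disabled around a bulk load and re-enabled afterwards; the object names here are placeholders:

ALTER TRIGGER orders_audit_trg DISABLE;
ALTER TABLE orders DISABLE CONSTRAINT orders_status_chk;

-- ... perform the bulk DML here ...

ALTER TABLE orders ENABLE CONSTRAINT orders_status_chk;
ALTER TRIGGER orders_audit_trg ENABLE;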

Restructuring the Data
After restructuring the indexes and the statement, consider restructuring the data:
1.       Introduce derived values. Avoid GROUP BY in response-critical code.
2.       Review your data design. Change the design of your system if it can improve performance.
3.       Consider partitioning, if appropriate.

Maintaining Execution Plans Over Time
You can maintain the existing execution plan of SQL statements over time either by using stored statistics or by using SQL plan baselines. Storing optimizer statistics for tables applies to all SQL statements that refer to those tables, while storing an execution plan as a SQL plan baseline maintains the plan for a set of SQL statements. If both statistics and a SQL plan baseline are available for a SQL statement, the optimizer first uses a cost-based search method to build a best-cost plan and then tries to find a matching plan in the SQL plan baseline. If a match is found, the optimizer proceeds using this plan.
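
As a minimal sketch, a plan already in the cursor cache can be captured as a SQL plan baseline with DBMS_SPM; the sql_id value below is hypothetical:

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Load the cached plan(s) of one statement into a SQL plan baseline
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '4b3fr8zk9g2xa');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/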

Visiting Data as Few Times as Possible
Applications should try to access each row only once. This reduces network traffic and database load. Consider doing the following (a brief sketch follows this list):
1.       Combine Multiple Scans with CASE Statements
2.       Use DML with RETURNING Clause
3.       Modify All the Data Needed in One Statement
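
The sketch below illustrates the first two points; the tables, columns, and values are placeholders:

-- One pass over ORDERS produces several conditional counts (no repeated scans)
SELECT COUNT(CASE WHEN order_total <  100 THEN 1 END) AS small_orders,
       COUNT(CASE WHEN order_total >= 100 THEN 1 END) AS large_orders
  FROM orders;

-- DML with RETURNING avoids a separate SELECT after the UPDATE
DECLARE
  l_new_salary employees.salary%TYPE;
BEGIN
  UPDATE employees
     SET salary = salary * 1.10
   WHERE employee_id = 100
  RETURNING salary INTO l_new_salary;
  DBMS_OUTPUT.PUT_LINE('New salary: ' || l_new_salary);
END;
/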

The End






What is DATA PUMP?



What is data pump in Oracle?
Oracle Data Pump is a newer, faster, and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.
Oracle Data Pump is a new and unique feature of Oracle Database. A new public interface package, DBMS_DATAPUMP, provides a server-side infrastructure for fast data and metadata movement between Oracle databases. It is ideal for large databases and data warehousing environments, where high-performance data movement offers significant time savings to database administrators.
Data Pump automatically manages multiple, parallel streams of unload and load for maximum throughput. The degree of parallelism can be adjusted on-the-fly. There are new and easy-to-use Export and Import utilities (expdp and impdp), as well as a web-based Enterprise Manager export/import interface.
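
For example, a simple schema-level export and import might look like the commands below. The directory object, schema, and file names are placeholders, and the directory object must already exist in the database:

expdp hr/hr_password DIRECTORY=dpump_dir DUMPFILE=hr_schema.dmp LOGFILE=hr_exp.log SCHEMAS=hr PARALLEL=2

impdp hr/hr_password DIRECTORY=dpump_dir DUMPFILE=hr_schema.dmp LOGFILE=hr_imp.log SCHEMAS=hr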

Key Features:
·         Fast Performance: Operations performed with the new Data Pump Export and Import utilities are typically much faster than operations performed with the original Export and Import utilities. With Data Pump Export, when the direct path method of unloading is used, a single stream of data unload is about two times faster than original Export. With Data Pump Import, a single stream of data load is about 15-45 times faster than original Import.
·         Improved Management Restart: Every Data Pump operation has a master table that is created in the schema of the user running a Data Pump job. The master table maintains information about all aspects of the job, such as the current state of every object exported or imported and its location in the dump file set.
·         Fine-Grained Object Selection: A Data Pump job can exclude or include virtually any type of object and any subset of objects within a type. The following client parameters are used (an example parameter file follows this list):
1.       The EXCLUDE parameter filters the metadata that is exported and imported by specifying objects and object types to be excluded from the current operation. An optional name qualifier can be used for finer selectivity within each object type specified.
2.       The INCLUDE parameter filters the metadata that is exported and imported by specifying objects and object types to be included for the current operation. An optional name qualifier can be used for finer selectivity within each object type specified.
3.       The CONTENT parameter specifies what is exported or imported: metadata only, data only, or both.
4.       The QUERY parameter filters data by specifying a clause for a SQL SELECT statement, which is applied to all tables in the export job or to a specific table.
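
Putting these parameters together, a hypothetical filtered export could use a parameter file, which avoids command-line quoting problems. All names and the WHERE clause below are placeholders. Contents of export_hr.par:

DIRECTORY=dpump_dir
DUMPFILE=hr_filtered.dmp
SCHEMAS=hr
CONTENT=ALL
EXCLUDE=INDEX
QUERY=hr.employees:"WHERE department_id = 50"

Run it as: expdp hr/hr_password PARFILE=export_hr.par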