
"INS-30071" Error During Upgrade Oracle Client or Server From 11.2.0.2 To 11.2.0.3

Wednesday, November 6, 2013 · 0 comments

I got the error “[INS-30071] An error occurred while trying to determine the running processes or services” while upgrading the Oracle Client from 11.2.0.2 to 11.2.0.3.

Cause: The 11.2.0.3 installation was attempted on top of the existing 11.2.0.2 Oracle home.
Action: Install the 11.2.0.3 client into a different location, then uninstall the 11.2.0.2 client.


[PCSF_46007] No gateway connectivity is provided for domain. ERROR: Cannot connect to Integration Service.

Friday, November 1, 2013 · 0 comments

Error: [PCSF_46007] No gateway connectivity is provided for domain [Domain_vminf]. ERROR: Cannot connect to Integration Service [is_vminf].


C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin>pmcmd startworkflow -sv is_vminf -d Domain_vminf -u Administrator -p Administrator -f TEST -rin wf_test_0001 wf_test

Informatica(r) PMCMD, version [9.5.1 HotFix3], build [261.0906], Windows 32-bit
Copyright (c) Informatica Corporation 1994 - 2013
All Rights Reserved.

Invoked at Fri Nov 01 15:39:04 2013

[PCSF_46007] No gateway connectivity is provided for domain [Domain_vminf].
ERROR: Cannot connect to Integration Service [is_vminf].

Completed at Fri Nov 01 15:39:05 2013

C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin>

Check the connection:


C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin>pmcmd pingservice -sv is_vminf -d Domain_vminf

Informatica(r) PMCMD, version [9.5.1 HotFix3], build [261.0906], Windows 32-bit
Copyright (c) Informatica Corporation 1994 - 2013
All Rights Reserved.

Invoked at Fri Nov 01 15:48:52 2013

[PCSF_46007] No gateway connectivity is provided for domain [Domain_vminf].
ERROR: Cannot connect to Integration Service [is_vminf].

Completed at Fri Nov 01 15:48:53 2013

C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin>

Cause: The pmcmd command cannot find the domains.infa file with the domain gateway information.


Action: Set the environment variable INFA_DOMAINS_FILE.


C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin>set INFA_DOMAINS_FILE=C:\Informatica\9.5.1\clients\PowerCenterClient\domains.infa


C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin>pmcmd startworkflow -sv is_vminf -d Domain_vminf -u Administrator -p Administrator -f TEST -rin wf_test_0001 wf_test

Informatica(r) PMCMD, version [9.5.1 HotFix3], build [261.0906], Windows 32-bit
Copyright (c) Informatica Corporation 1994 - 2013
All Rights Reserved.

Invoked at Fri Nov 01 15:54:36 2013

Connected to Integration Service: [is_vminf].
Starting workflow [wf_test]
INFO: Starting execution of workflow [wf_test] in folder [TEST] last saved by user [Administrator].
Workflow wf_test with run instance name [WF_TEST_0001] and run id [42] started successfully.
Disconnecting from Integration Service

Completed at Fri Nov 01 15:54:38 2013

C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin>


How to change the SYS password

Tuesday, October 29, 2013 · 1 comment

Use the following:


SQL> connect / as sysdba
SQL> alter user <username> identified by <password>;

Action:


C:\>sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 29 15:08:10 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> alter user sys identified by oracle;

User altered.

SQL> exit

SQL> conn sys/oracle@sieb361 as sysdba
Connected.
SQL> exit;


Size of CLOB or BLOB in Oracle DB

Friday, October 25, 2013 · 0 comments

This is a simple query that returns the size of a BLOB or CLOB column, including its LOB index segments.


select '&TABLE' as table_name, '&COLUMN_NAME' as column_name, sum(dba_segments.BYTES) / 1024 / 1024 as Mb
from dba_segments
where (dba_segments.owner, dba_segments.segment_name) in
(
-- lob segments
select dba_lobs.owner, dba_lobs.segment_name
from dba_lobs
where dba_lobs.owner = '&OWNER' and
dba_lobs.table_name = '&TABLE' and
dba_lobs.column_name = '&COLUMN_NAME'
union all
-- lob index segments
select dba_lobs.owner, dba_lobs.index_name
from dba_lobs
where dba_lobs.owner = '&OWNER' and
dba_lobs.table_name = '&TABLE' and
dba_lobs.column_name = '&COLUMN_NAME'
);

See also:

Size of table in Oracle
Size of a CLOB



How to transform a set of values into a table

Thursday, October 24, 2013 · 0 comments

Transform a set of values into a table:


SQL>
SQL> select column_value
2 from table ( sys.odcinumberlist(20, 30) );

COLUMN_VALUE
------------
20
30

SQL>
SQL> select column_value
2 from table ( sys.odcivarchar2list('First', 'Second') );

COLUMN_VALUE
--------------------------------------------------------------------------------
First
Second

SQL>
SQL> select column_value
2 from table ( sys.odcidatelist(sysdate, sysdate + 1) );

COLUMN_VALUE
------------
24.10.2013 1
25.10.2013 1

SQL>

ORA-01031: insufficient privileges (when logging in as sqlplus "/ as sysdba")

Error: ORA-01031: insufficient privileges (when logging in as sqlplus "/ as sysdba")

Cause: User does not have sufficient rights.

Action:


C:\>sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on Thu Oct 24 14:44:37 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges

SQL> exit

Check whether the OS user you are logged on as is a member of the ORA_DBA group, and add the user to the ORA_DBA group if it is not.



C:\>sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on Thu Oct 24 14:45:03 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected.

See also ORA-01031: insufficient privileges (when the inline view is updated)


ORA-01031: insufficient privileges (when the inline view is updated)

Friday, October 18, 2013 · 0 comments

Error: ORA-01031: insufficient privileges (when the inline view is updated).

Cause: The user does not have the UPDATE privilege on the table joined in the select.

Action:

I got the error when the following statement was executed:


update (
select emp.comm
from scott.dept dept, scott.emp
where dept.loc = 'CHICAGO' and
emp.deptno = dept.deptno
)
set comm = comm + 100;

User "test" had the following rights:


grant select, update on emp to test;
grant select on dept to test;

To execute the update, user "test" must have the UPDATE privilege on all columns of the tables joined in the select.

grant update (deptno, loc) on dept to test;

Or the entire table:

grant update on dept to test;

UPD: ORA-01031: insufficient privileges (when logging in as sqlplus "/ as sysdba")


ORA-28000: the account is locked

Thursday, October 17, 2013 · 0 comments

Error:

C:\>sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Thu Oct 17 17:55:05 2013

Copyright (c) 1982, 2010, Oracle. All rights reserved.

SQL> conn SCOTT/TIGER@SIEB306
ERROR:
ORA-28000: the account is locked

SQL> exit

Cause: The user entered a wrong password consecutively for the maximum number of times specified by the profile parameter FAILED_LOGIN_ATTEMPTS, or the DBA has locked the account.


Action: Wait for PASSWORD_LOCK_TIME to elapse, or unlock the account.



SQL>
SQL> select dba_users.username, dba_users.account_status
2 from dba_users
3 where dba_users.username = 'SCOTT';

USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT EXPIRED & LOCKED

SQL> ALTER USER SCOTT ACCOUNT UNLOCK;

User altered

SQL>
SQL> select dba_users.username, dba_users.account_status
2 from dba_users
3 where dba_users.username = 'SCOTT';

USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT EXPIRED

SQL>

See also: ORA-28001: the password has expired


To prevent the account from being locked again:


SQL> ALTER PROFILE "DEFAULT" LIMIT PASSWORD_LIFE_TIME UNLIMITED;
SQL> ALTER PROFILE "DEFAULT" LIMIT FAILED_LOGIN_ATTEMPTS UNLIMITED;

"Process J001 died" And/Or "Process W000 died" And/Or "kkjcre1p: unable to spawn jobq slave process"

Friday, October 11, 2013 · 0 comments

I got an error:


Process J001 died, see its trace file
kkjcre1p: unable to spawn jobq slave process
Process W000 died, see its trace file

Cause: A resource problem


Action:

1. Increase the parameter JOB_QUEUE_PROCESSES (see Using Both DBMS_JOB and Oracle Scheduler).

alter system set job_queue_processes=10;

2. Make sure swap is correctly configured and increase it if necessary:

    4-8 GB of RAM: set swap to at least double the RAM size.

    8-16 GB of RAM: set swap to double the RAM size.

    16-32 GB of RAM or more: set swap at a rate of 0.75 x RAM.

3. Make sure all the Kernel parameters are configured correctly.

4. Check the in-memory file system size: ORA-00845 MEMORY_TARGET not supported on this system

5. Check the ulimits.
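The swap sizing rule in step 2 can be sketched as a small helper (illustrative Python, not an Oracle tool; the thresholds are the ones listed above):

```python
def recommended_swap_gb(ram_gb):
    """Recommended swap size (GB) for a given amount of RAM,
    following the sizing rule above."""
    if ram_gb <= 16:
        # 4-16 GB of RAM: swap should be (at least) double the RAM
        return ram_gb * 2
    # above 16 GB of RAM: swap at a rate of 0.75 x RAM
    return ram_gb * 0.75

print(recommended_swap_gb(8))   # 16
print(recommended_swap_gb(32))  # 24.0
```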



Permission denied when loading data into hive table

Tuesday, October 8, 2013 · 0 comments

I got an error:


[root ~]# sudo -u hive hive
hive> LOAD DATA INPATH '/user/hdfs/d_product_type_hive_exp.csv' OVERWRITE INTO TABLE D_PRODUCT_TYPE_HIVE;
Loading data to table default.d_product_type_hive
rmr: DEPRECATED: Please use 'rm -r' instead.
Moved: 'hdfs://xxxxx:8020/user/hive/warehouse/d_product_type_hive' to trash at: hdfs://xxxxx:8020/user/hive/.Trash/Current
Failed with exception Permission denied: user=hive, access=WRITE, inode="/user/hdfs":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkParentAccess(FSNamesystem.java:4666)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInternal(FSNamesystem.java:2698)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInt(FSNamesystem.java:2666)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2645)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:589)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44964)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
hive>

Cause: Command "LOAD DATA"(without LOCAL) moves the data to the final location and the user hive don't have the rights to delete the file.
Action: Give the right to "write" on the directory with the file.


[root ~]# sudo -u hdfs hadoop fs -ls -d /user/hdfs/
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-10-08 18:28 /user/hdfs
[root ~]# sudo -u hdfs hadoop fs -chmod o+w /user/hdfs/
[root ~]# sudo -u hdfs hadoop fs -ls -d /user/hdfs/
Found 1 items
drwxr-xrwx - hdfs supergroup 0 2013-10-08 18:28 /user/hdfs
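As a side note on reading those mode strings: the write bit for "other" users is the second-to-last character of the permission string. A hypothetical Python helper (not part of Hadoop) to check it:

```python
def other_can_write(mode):
    """True if the 'other' class has the write bit set in an
    ls-style mode string such as 'drwxr-xr-x'."""
    # characters 7-9 are the 'other' triplet; index 8 is its write bit
    return mode[8] == 'w'

print(other_can_write('drwxr-xr-x'))  # False: hive cannot write
print(other_can_write('drwxr-xrwx'))  # True: after chmod o+w
```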

Try again:


[root ~]# sudo -u hive hive
hive> LOAD DATA INPATH '/user/hdfs/d_product_type_hive_exp.csv' OVERWRITE INTO TABLE D_PRODUCT_TYPE_HIVE;
Loading data to table default.d_product_type_hive
rmr: DEPRECATED: Please use 'rm -r' instead.
Moved: 'hdfs://xxxxx:8020/user/hive/warehouse/d_product_type_hive' to trash at: hdfs://xxxxx:8020/user/hive/.Trash/Current
Table default.d_product_type_hive stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 354, raw_data_size: 0]
OK
Time taken: 5.638 seconds
hive>

Hive: mismatched input '<EOF>' expecting StringLiteral near 'BY' in table row format's field separator

Monday, October 7, 2013 · 3 comments

I got the error “FAILED: ParseException line 1:155 mismatched input '<EOF>' expecting StringLiteral near 'BY' in table row format's field separator” when creating a table in Hive.


hive> CREATE TABLE IF NOT EXISTS D_PRODUCT_TYPE_HIVE (PROD_TYPE_ID DOUBLE, PROD_TYPE_NAME STRING, DESCRIPTION STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ';' LINES TERMINATED BY '\n';
FAILED: ParseException line 1:155 mismatched input '<EOF>' expecting StringLiteral near 'BY' in table row format's field separator
hive>

The semicolon is a reserved character in Hive, so it needs to be escaped: use '\;' or the octal code '\073' (the hex code '\03B' does not work).
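A quick sanity check on the character codes (Python, just for illustration): octal 073 and hex 3B are both ASCII for ';', although Hive's DDL only accepts the octal spelling. The helper below for building the octal escape is hypothetical, not part of Hive:

```python
def hive_octal_escape(ch):
    """Octal escape for a delimiter character, e.g. ';' -> '\\073'.
    (Hypothetical helper; Hive itself just takes the literal string.)"""
    return '\\%03o' % ord(ch)

# ';' is ASCII 59: octal 073, hex 3B
assert chr(0o73) == ';' and chr(0x3B) == ';'
print(hive_octal_escape(';'))  # \073
```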



hive> CREATE TABLE IF NOT EXISTS D_PRODUCT_TYPE_HIVE (PROD_TYPE_ID DOUBLE, PROD_TYPE_NAME STRING, DESCRIPTION STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\;' LINES TERMINATED BY '\n';
OK
Time taken: 0.308 seconds
hive>

ORA-00845 MEMORY_TARGET not supported on this system

Thursday, October 3, 2013 · 0 comments

Error:


SQL> conn / as sysdba
Connected to an idle instance.
SQL> shutdown immediate;
ORA-01034: ORACLE not available
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory
Additional information: 1
Additional information: 3407880
Additional information: 8
SQL> startup;
ORA-00845: MEMORY_TARGET not supported on this system

Cause: The MEMORY_TARGET parameter is not supported on this operating system, or /dev/shm is not sized correctly on Linux (the size of /dev/shm needs to be greater than MEMORY_TARGET or MEMORY_MAX_TARGET).


Action: Check the size of /dev/shm and increase it if necessary.


Check /dev/shm:

[root]# df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 2.0G 1.2G 0.8G 60% /dev/shm
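The same size check can be scripted; here is a sketch using Python's os.statvfs (it assumes a Linux host where /dev/shm is mounted, and the MEMORY_TARGET value is an example, not read from the instance):

```python
import os

def fs_size_bytes(path):
    """Total size of the filesystem mounted at path, in bytes."""
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks

# /dev/shm must be larger than MEMORY_TARGET / MEMORY_MAX_TARGET
memory_target = 2 * 1024**3  # example value: 2 GB
print(fs_size_bytes('/dev/shm') > memory_target)
```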

Change the /dev/shm entry in /etc/fstab:

[root]# vi /etc/fstab
tmpfs /dev/shm tmpfs defaults,size=3G 0 0

And remount:

[root]# mount -o remount /dev/shm

Check the changes:

[root]# df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.0G 1.2G 1.9G 40% /dev/shm

Startup database:


SQL> conn / as sysdba;
Connected to an idle instance.
SQL> startup;
ORACLE instance started.

Total System Global Area 1937457152 bytes
Fixed Size 2214576 bytes
Variable Size 1476396368 bytes
Database Buffers 452984832 bytes
Redo Buffers 5861376 bytes
Database mounted.
Database opened.
SQL>

Decrease HWM (high watermark) of a tablespace

Friday, September 27, 2013 · 0 comments

The script below decreases the HWM by moving objects within a database tablespace. Before running it:

  • set the tablespace that will be used for the transfer;
  • set the file_id of the datafile you want to shrink;
  • set the maximum number of iterations.

declare
-- set your tablespace, which will be used to transfer
vts varchar2(30) := 'users';
-- set your file_id, which will reduce
vfile_id pls_integer := 5;
-- set maximum number of iterations
vi_max pls_integer := 10;

vsql varchar2(32000);
vsqlind varchar2(32000);
vsql_prev varchar(32000) := null;
vi pls_integer := 0;
vlob dba_lobs%rowtype;

procedure put_line(vs varchar2)
is
begin
dbms_output.put_line(to_char(sysdate, 'yyyy.mm.dd hh24:mi:ss') || ' ' || vs);
end put_line;

procedure saving(vfile_id pls_integer)
is
vsv varchar2(2000);
begin
select 'Saving space = ' ||
to_char(ceil(dba_data_files.blocks * c.db_block_size/1024/1024) - ceil((nvl(hwm, 1) * c.db_block_size)/1024/1024)) ||
'Mb from ' || ceil(d.bytes/1024/1024) || ' Mb'
into vsv
from dba_data_files,
(select file_id,
max(block_id + blocks - 1) as hwm
from dba_extents
where dba_extents.file_id = vfile_id
group by dba_extents.file_id
) b,
(select value db_block_size from v$parameter where name = 'db_block_size') c,
(
select dba_free_space.file_id,
sum(dba_free_space.bytes) as bytes
from dba_free_space
where dba_free_space.file_id = vfile_id
group by dba_free_space.file_id
) d
where dba_data_files.file_id = b.file_id and
dba_data_files.file_id = d.file_id(+) and
dba_data_files.file_id = vfile_id;

put_line(vsv);
end saving;

begin
saving(vfile_id);

<<first_loop>>
loop
vi := vi + 1;
dbms_output.put_line(vi);

<<second_loop>>
for vcur in (
select dba_extents.*
from dba_extents
where dba_extents.file_id = vfile_id and
dba_extents.block_id = (select max(dba_extents.block_id) from dba_extents where dba_extents.file_id = vfile_id)
) loop

if (vcur.segment_type = 'TABLE PARTITION') then
vsql := 'alter table ' || vcur.owner || '.' || vcur.segment_name || ' move partition ' || vcur.partition_name || ' tablespace ' || vts;
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter table ' || vcur.owner || '.' || vcur.segment_name || ' move partition ' || vcur.partition_name || ' tablespace ' || vcur.tablespace_name || ' update indexes';
put_line(vsql);
execute immediate vsql;
elsif (vcur.segment_type = 'TABLE SUBPARTITION') then
vsql := 'alter table ' || vcur.owner || '.' || vcur.segment_name || ' move subpartition ' || vcur.partition_name || ' tablespace ' || vts;
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter table ' || vcur.owner || '.' || vcur.segment_name || ' move subpartition ' || vcur.partition_name || ' tablespace ' || vcur.tablespace_name || ' update indexes';
put_line(vsql);
execute immediate vsql;
elsif (vcur.segment_type = 'TABLE') then
vsql := 'alter table ' || vcur.owner || '.' || vcur.segment_name || ' move tablespace ' || vts;
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter table ' || vcur.owner || '.' || vcur.segment_name || ' move tablespace ' || vcur.tablespace_name;
put_line(vsql);
execute immediate vsql;

-- rebuild index
for vcurind in (
select all_indexes.owner, all_indexes.index_name
from all_indexes
where all_indexes.table_owner = vcur.owner and
all_indexes.status = 'UNUSABLE' and
all_indexes.table_name = vcur.segment_name
order by all_indexes.table_name, all_indexes.index_name
) loop

vsqlind := 'alter index ' || vcurind.owner || '.' || vcurind.index_name || ' rebuild';
execute immediate vsqlind;
end loop;

elsif (vcur.segment_type = 'INDEX') then
vsql := 'alter index ' || vcur.owner || '.' || vcur.segment_name || ' rebuild tablespace ' || vts;
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter index ' || vcur.owner || '.' || vcur.segment_name || ' rebuild tablespace ' || vcur.tablespace_name;
put_line(vsql);
execute immediate vsql;
elsif (vcur.segment_type = 'INDEX PARTITION') then
vsql := 'alter index ' || vcur.owner || '.' || vcur.segment_name || ' rebuild partition ' || vcur.partition_name ||' tablespace ' || vts;
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter index ' || vcur.owner || '.' || vcur.segment_name || ' rebuild partition ' || vcur.partition_name ||' tablespace ' || vcur.tablespace_name;
put_line(vsql);
execute immediate vsql;
elsif (vcur.segment_type = 'INDEX SUBPARTITION') then
vsql := 'alter index ' || vcur.owner || '.' || vcur.segment_name || ' rebuild subpartition ' || vcur.partition_name ||' tablespace ' || vts;
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter index ' || vcur.owner || '.' || vcur.segment_name || ' rebuild subpartition ' || vcur.partition_name ||' tablespace ' || vcur.tablespace_name;
put_line(vsql);
execute immediate vsql;
elsif (vcur.segment_type = 'LOBSEGMENT') then
select dba_lobs.*
into vlob
from dba_lobs
where dba_lobs.owner = vcur.owner and
dba_lobs.segment_name = vcur.segment_name;

vsql := 'alter table ' || vlob.owner || '.' || vlob.table_name || ' move lob (' || vlob.column_name ||') store as (tablespace ' || vts || ')';
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter table ' || vlob.owner || '.' || vlob.table_name || ' move lob (' || vlob.column_name ||') store as (tablespace ' || vcur.tablespace_name || ')';
put_line(vsql);
execute immediate vsql;
elsif (vcur.segment_type = 'LOBINDEX') then
select dba_lobs.*
into vlob
from dba_lobs
where dba_lobs.owner = vcur.owner and
dba_lobs.index_name = vcur.segment_name;

vsql := 'alter table ' || vlob.owner || '.' || vlob.table_name || ' move lob (' || vlob.column_name ||') store as (tablespace ' || vts || ')';
exit first_loop when vsql = vsql_prev or vi > vi_max;
vsql_prev := vsql;

put_line(vsql);
execute immediate vsql;

vsql := 'alter table ' || vlob.owner || '.' || vlob.table_name || ' move lob (' || vlob.column_name ||') store as (tablespace ' || vcur.tablespace_name || ')';
put_line(vsql);
execute immediate vsql;
else
exit first_loop;
end if;
end loop second_loop;

end loop first_loop;

saving(vfile_id);
end;


Result:



2013.09.26 16:24:07 Saving space = 18Mb from 22 Mb
1
2013.09.26 16:24:16 alter index SH.FW_PSC_S_MV_WD_BIX rebuild tablespace users
2013.09.26 16:24:17 alter index SH.FW_PSC_S_MV_WD_BIX rebuild tablespace EXAMPLE
2
2013.09.26 16:24:26 alter index SH.FW_PSC_S_MV_PROMO_BIX rebuild tablespace users
2013.09.26 16:24:26 alter index SH.FW_PSC_S_MV_PROMO_BIX rebuild tablespace EXAMPLE
3
2013.09.26 16:24:36 alter index SH.FW_PSC_S_MV_CHAN_BIX rebuild tablespace users
2013.09.26 16:24:36 alter index SH.FW_PSC_S_MV_CHAN_BIX rebuild tablespace EXAMPLE
4
2013.09.26 16:24:45 alter index SH.FW_PSC_S_MV_SUBCAT_BIX rebuild tablespace users
2013.09.26 16:24:45 alter index SH.FW_PSC_S_MV_SUBCAT_BIX rebuild tablespace EXAMPLE
5
2013.09.26 16:24:55 alter table SH.FWEEK_PSCAT_SALES_MV move tablespace users
2013.09.26 16:24:55 alter table SH.FWEEK_PSCAT_SALES_MV move tablespace EXAMPLE
6
2013.09.26 16:25:05 alter table SH.CAL_MONTH_SALES_MV move tablespace users
2013.09.26 16:25:05 alter table SH.CAL_MONTH_SALES_MV move tablespace EXAMPLE
7
2013.09.26 16:25:14 alter index SH.CUSTOMERS_YOB_BIX rebuild tablespace users
2013.09.26 16:25:14 alter index SH.CUSTOMERS_YOB_BIX rebuild tablespace EXAMPLE
8
2013.09.26 16:25:23 alter index SH.CUSTOMERS_MARITAL_BIX rebuild tablespace users
2013.09.26 16:25:23 alter index SH.CUSTOMERS_MARITAL_BIX rebuild tablespace EXAMPLE
9
2013.09.26 16:25:33 alter index SH.CUSTOMERS_GENDER_BIX rebuild tablespace users
2013.09.26 16:25:33 alter index SH.CUSTOMERS_GENDER_BIX rebuild tablespace EXAMPLE
10
2013.09.26 16:25:42 alter index SH.PRODUCTS_PROD_CAT_IX rebuild tablespace users
2013.09.26 16:25:42 alter index SH.PRODUCTS_PROD_CAT_IX rebuild tablespace EXAMPLE
11
2013.09.26 16:25:52 Saving space = 20Mb from 22 Mb

ORA-28001: the password has expired

Thursday, September 26, 2013 · 0 comments

Cause: The user's password has expired and needs to be changed.

Action: Change the password.


SQL> select dba_users.account_status
2 from dba_users
3 where dba_users.username = 'INFD_T';

ACCOUNT_STATUS
--------------------------------
EXPIRED

SQL> ALTER USER INFD_T IDENTIFIED BY INFD_T;

User altered

SQL>
SQL> select dba_users.account_status
2 from dba_users
3 where dba_users.username = 'INFD_T';

ACCOUNT_STATUS
--------------------------------
OPEN

SQL>

Other methods:


ALTER USER <username> IDENTIFIED BY <password>;
ALTER USER <username> IDENTIFIED BY VALUES '<password_hash>';

HDFS: Failed to move to trash

Monday, June 24, 2013 · 0 comments

I got an error:

[root@vm1 tmp]# hadoop fs -rmr /tmp/my_dir
13/06/21 17:51:40 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://vm2:8020/user/root/.Trash/Current/tmp
rmr: Failed to move to trash: hdfs://vm2:8020/tmp/my_dir. Consider using -skipTrash option

I checked whether the directory "/user/root" exists and tried to create it.

[root@vm1 tmp]# hadoop fs -ls /
Found 2 items
drwxrwxrwt - hdfs supergroup 0 2013-06-21 17:50 /tmp
drwxr-xr-x - hdfs supergroup 0 2013-06-20 14:25 /user
[root@vm1 tmp]# hadoop fs -ls /user
Found 2 items
drwxrwxr-t - hive hive 0 2013-06-20 14:24 /user/hive
drwxrwxr-x - oozie oozie 0 2013-06-20 14:25 /user/oozie
[root@vm1 tmp]# hadoop fs -mkdir /user/root
mkdir: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

The owner of the directory "/user" is hdfs:supergroup.
Let's create a home directory for root as the hdfs user:

[root@v1 tmp]# sudo -u hdfs hadoop fs -mkdir /user/root 
[root@v1 tmp]# sudo -u hdfs hadoop fs -chown root:root /user/root

Now try removing the directory again:

[root@v1 tmp]# hadoop fs -rm -r /tmp/my_dir
Moved: 'hdfs://vm2:8020/tmp/my_dir' to trash at: hdfs://vm2:8020/user/root/.Trash/Current


Split comma separated values to columns

Monday, June 17, 2013 · 0 comments


Another way to split comma-separated values into columns.


with t_string as
(
select 'John,Roy,Alice,Helen,Mark,Barbara,Elizabeth,Lara' as str
from dual
)
select regexp_substr(t_string.str,'[^,]+', 1, level)
from t_string
connect by level <= regexp_count(t_string.str,'[^,]+')
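For comparison, the same split sketched in Python (the pattern [^,]+ matches each run of non-comma characters, just as in the regexp_substr call above):

```python
import re

def split_values(s):
    # one match per run of non-comma characters,
    # mirroring regexp_substr(str, '[^,]+', 1, level)
    return re.findall(r'[^,]+', s)

print(split_values('John,Roy,Alice,Helen,Mark,Barbara,Elizabeth,Lara'))
```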

Other methods:


Result of a select as a single string, and a single string as a set of rows



KUP-04026: field too long for datatype

Friday, June 14, 2013 · 0 comments


When reading from an external table, I get the following error.


KUP-04021: field formatting error for field COL06
KUP-04026: field too long for datatype
KUP-04101: record 8 rejected in file /u01/app/oracle/admin/orcl/dpdump/data01.csv

Here's my current table definition:


create table t_data01 (
col01 varchar2(100 char),
col02 varchar2(100 char),
col03 varchar2(30 char),
col04 varchar2(30 char),
col05 varchar2(30 char),
col06 varchar2(500 char),
col07 varchar2(50 char),
col08 varchar2(30 char),
col09 date,
col10 date,
col11 date,
col12 varchar2(50 char)
)
organization external (
type oracle_loader
default directory data_pump_dir
access parameters (
records delimited by 0x'0d' characterset cl8mswin1251
badfile data_pump_dir:'t_data01.bad'
logfile data_pump_dir:'t_data01.log'
discardfile data_pump_dir:'t_data01.dsc'
fields terminated by '¤'
missing field values are null
reject rows with all null fields
(
col01, col02, col03, col04, col05,
col06, col07, col08, col09 date 'dd.mm.rr', col10 date 'dd.mm.rr',
col11 date 'dd.mm.rr', col12
)
)
location ('data01.csv')
)
reject limit unlimited;

In the table definition, the column "col06" is declared as varchar2(500 char), but ORACLE_LOADER uses its default field type and size, COL06 CHAR(255). This can be seen in the log file:



    COL06                           CHAR (255)
Terminated by "¤"
Trim whitespace same as SQL Loader


The column size must be specified explicitly in the ORACLE_LOADER access parameters:


-- ...
(
col01, col02, col03, col04, col05,
col06 char(500), col07, col08, col09 date 'dd.mm.rr', col10 date 'dd.mm.rr',
col11 date 'dd.mm.rr', col12
)
-- ...