
HDFS: Failed to move to trash

Monday, June 24, 2013. 0 comments

I got an error:

[root@vm1 tmp]# hadoop fs -rmr /tmp/my_dir
13/06/21 17:51:40 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://vm2:8020/user/root/.Trash/Current/tmp
rmr: Failed to move to trash: hdfs://vm2:8020/tmp/my_dir. Consider using -skipTrash option
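
The message itself suggests a workaround: skip trash entirely. A hedged example with the same path (note that -skipTrash deletes immediately, with no way to restore):

[root@vm1 tmp]# hadoop fs -rmr -skipTrash /tmp/my_dir

I wanted trash to keep working, though, so let's see why the move failed.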

I checked whether the home directory "/user/root" exists and tried to create it:

[root@vm1 tmp]# hadoop fs -ls /
Found 2 items
drwxrwxrwt - hdfs supergroup 0 2013-06-21 17:50 /tmp
drwxr-xr-x - hdfs supergroup 0 2013-06-20 14:25 /user
[root@vm1 tmp]# hadoop fs -ls /user
Found 2 items
drwxrwxr-t - hive hive 0 2013-06-20 14:24 /user/hive
drwxrwxr-x - oozie oozie 0 2013-06-20 14:25 /user/oozie
[root@vm1 tmp]# hadoop fs -mkdir /user/root
mkdir: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

The owner of the "/user" directory is hdfs:supergroup, and root has no write access to it. Let's create a home directory for root on behalf of the hdfs user.

[root@v1 tmp]# sudo -u hdfs hadoop fs -mkdir /user/root 
[root@v1 tmp]# sudo -u hdfs hadoop fs -chown root:root /user/root
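
As a quick sanity check (output omitted; /user/root should now be listed with owner root:root):

[root@v1 tmp]# hadoop fs -ls /user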

Now let's try deleting the directory again.

[root@v1 tmp]# hadoop fs -rm -r /tmp/my_dir
Moved: 'hdfs://vm2:8020/tmp/my_dir' to trash at: hdfs://vm2:8020/user/root/.Trash/Current
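
The directory now sits in trash under /user/root/.Trash/Current. For reference, it can be listed there like any other path, and trash can be emptied on demand (this permanently deletes everything checkpointed in trash):

[root@v1 tmp]# hadoop fs -ls /user/root/.Trash/Current/tmp
[root@v1 tmp]# hadoop fs -expunge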


Split comma-separated values to columns

Monday, June 17, 2013. 0 comments


Another way to split comma-separated values apart, returning one value per row.


with t_string as
(
  select 'John,Roy,Alice,Helen,Mark,Barbara,Elizabeth,Lara' as str
  from dual
)
select regexp_substr(t_string.str, '[^,]+', 1, level) as val
from t_string
connect by level <= regexp_count(t_string.str, '[^,]+');
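
For reference, running this in SQL*Plus should return one name per row (the val alias is added above for readability):

VAL
------------------------------------------------
John
Roy
Alice
Helen
Mark
Barbara
Elizabeth
Lara

8 rows selected.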

Other methods:


The result of a SELECT as a single string, and a single string as a set of rows




KUP-04026: field too long for datatype

Friday, June 14, 2013. 2 comments


When reading from an external table, I got the following error:


KUP-04021: field formatting error for field COL06
KUP-04026: field too long for datatype
KUP-04101: record 8 rejected in file /u01/app/oracle/admin/orcl/dpdump/data01.csv

Here's my current table definition:


create table t_data01 (
  col01 varchar2(100 char),
  col02 varchar2(100 char),
  col03 varchar2(30 char),
  col04 varchar2(30 char),
  col05 varchar2(30 char),
  col06 varchar2(500 char),
  col07 varchar2(50 char),
  col08 varchar2(30 char),
  col09 date,
  col10 date,
  col11 date,
  col12 varchar2(50 char)
)
organization external (
  type oracle_loader
  default directory data_pump_dir
  access parameters (
    records delimited by 0x'0d' characterset cl8mswin1251
    badfile data_pump_dir:'t_data01.bad'
    logfile data_pump_dir:'t_data01.log'
    discardfile data_pump_dir:'t_data01.dsc'
    fields terminated by '¤'
    missing field values are null
    reject rows with all null fields
    (
      col01, col02, col03, col04, col05,
      col06, col07, col08, col09 date 'dd.mm.rr', col10 date 'dd.mm.rr',
      col11 date 'dd.mm.rr', col12
    )
  )
  location ('data01.csv')
)
reject limit unlimited;

In the table definition, the column col06 is declared as varchar2(500 char). But since no field datatype is given in the access parameters, ORACLE_LOADER falls back to its default, COL06 CHAR(255). This can be seen in the log file:



    COL06                           CHAR (255)
      Terminated by "¤"
      Trim whitespace same as SQL Loader


To fix this, the field size must be specified explicitly in the ORACLE_LOADER field list:


-- ...
(
  col01, col02, col03, col04, col05,
  col06 char(500), col07, col08, col09 date 'dd.mm.rr', col10 date 'dd.mm.rr',
  col11 date 'dd.mm.rr', col12
)
-- ...
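
After recreating the external table with the corrected field list, record 8 should load instead of being rejected. A quick check (table name as above; any remaining bad rows land in t_data01.bad and are described in t_data01.log):

select count(*) from t_data01;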