Problem with import and tablespace

Miki
October 28, 2004
Hello.

I have an Oracle 7 server on Windows NT Server 4.0 and a dmp file containing the database exported from a machine with the same Oracle version and operating system, but different hardware.

When I try to import the database file (the export was a Full Database export), it fails on the tablespaces, so I have to create them by hand. It lets me create all of them fine (except the ones Oracle creates by default) save for one, for which it says the size is not valid. The tablespace in question is a bit over 2 GB, but without it I cannot import the whole database, and it happens to be the one holding the most important data. How can I get this done?

To create the export file I used the Data Manager in Oracle Enterprise Manager, and to import it on the new machine I am using exactly the same program. Why doesn't it work?

A while ago I managed to partially import the database, and when connecting with the client application the data could be read, but no records could be added. Could that be because the 2 GB tablespace is at its limit?

I thought about creating that tablespace with a smaller size and letting it grow during the import, but it does not grow, and I am afraid that could make me lose data, which I cannot afford. Ideally, I need the database to end up exactly the same as on the original machine.

Thanks

Rosendo
October 28, 2004
I don't think it has to do with the 2 GB limit.
In any case, create the tablespace with several 2 GB files.
Try it and let me know.
Rosendo.
PS: If you don't send us the database errors, I don't know where to point you.

Miki
October 28, 2004
Hello again.

This is how I am creating the tablespace: CREATE TABLESPACE "nombre_tablespace" DATAFILE 'path_absoluto/fichero' SIZE 3145728000 DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50) ONLINE PERMANENT;

This returns the error "ORA-02237: invalid file size".

The thing is, I create another tablespace with the same statement but a smaller SIZE (about one gigabyte) and that one gets created without problems. In case it was a size issue, I tried creating the 2 GB tablespace with a smaller size, hoping it would be extended automatically during the import, but it tells me that no more space can be added to the tablespace, so I doubt all the tables will be copied correctly, since the final size of the tablespace is smaller than it should be (at least I assume that if it is 2 GB it is because it holds a lot of data, but I don't really know, because the database does not belong to my company).
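
One thing I am tempted to try (only a guess on my part, and I don't know whether my release accepts a single file that large) is writing the size in megabytes instead of bytes, since 3145728000 bytes is exactly 3000M:

CREATE TABLESPACE "nombre_tablespace" DATAFILE 'path_absoluto/fichero' SIZE 3000M DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50) ONLINE PERMANENT;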

As I said, if I create the tablespace with a smaller size, run the Data Manager program and use the import, it starts importing the tables and the users, but once it has finished, when the application tries to load data it reports "ORA-00942: Table or view does not exist"; yet running a SELECT statement against the table, the table does show up.
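
In case it helps, this is the kind of check I was thinking of running to see which schema actually owns the table (just a sketch; NOMBRE_TABLA stands for the real table name, and I am only guessing that the import may have left the table under a different owner than the user the client application connects as):

SELECT owner, table_name FROM all_tables WHERE table_name = 'NOMBRE_TABLA';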

Sometimes, after creating a table, the import shows the message "..skipping table "nombre_tabla"". What does that mean? Is data not being loaded into the table?

Thanks

Manny
October 28, 2004
Hi, Miki.

If I remember correctly, the Oracle7 release has limits on the size of the files you can create for a tablespace. You can try creating a smaller one and then growing it with additional datafiles. E.g.:

SQL> CREATE TABLESPACE MANNY DATAFILE 'path_absoluto/fichero' SIZE 1000M;
SQL> ALTER TABLESPACE MANNY ADD DATAFILE
'<full path of the datafile>' SIZE 1000M;
SQL> ALTER TABLESPACE MANNY ADD DATAFILE
'<full path of the datafile>' SIZE 1000M;

This way the tablespace can end up with around 3 GB of available space. Verify it with:

SQL>select sum(bytes/1024/1024) from dba_data_files
where tablespace_name='MANNY';

Along the same lines, for a tablespace to be able to grow automatically, its datafile must have AUTOEXTEND ON.
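
For example, something along these lines (only a sketch; AUTOEXTEND exists on the later Oracle7 releases, so check that yours supports it, and the path and sizes are just placeholders):

SQL> ALTER DATABASE DATAFILE '<full path of the datafile>'
     AUTOEXTEND ON NEXT 10M MAXSIZE 2000M;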

Try these operations.
Manny

NOTE: This kind of situation has happened to me in UNIX environments, since there is a limit on the size of the files that can be created on the filesystem, but that is not your case.

bye.

Manny
October 28, 2004
Hi Miki,
See the documents attached below;
It could be a BUG issue, or one related to db_block_size.


Introduction
~~~~~~~~~~~~
This article describes "2Gb" issues. It gives information on why 2Gb is a magical number and outlines the issues you need to know about if you are considering using Oracle with files larger than 2Gb in size. It also looks at some other file related limits and issues. The article has a Unix bias as this is where most of the 2Gb issues arise but there is information relevant to other (non-unix) platforms. Articles giving port specific limits are listed in the last section.

Topics covered include:
  Why is 2Gb a Special Number ?
  Why use 2Gb+ Datafiles ?
  Export and 2Gb
  SQL*Loader and 2Gb
  Oracle and other 2Gb issues
  Port Specific Information on "Large Files"

Why is 2Gb a Special Number ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many CPU's and system call interfaces (API's) in use today use a word size of 32 bits. This word size imposes limits on many operations. In many cases the standard API's for file operations use a 32-bit signed word to represent both file size and current position within a file (byte displacement). A 'signed' 32bit word uses the top most bit as a sign indicator leaving only 31 bits to represent the actual value (positive or negative). In hexadecimal the largest positive number that can be represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal. This is ONE less than 2Gb.

Files of 2Gb or more are generally known as 'large files'. As one might expect problems can start to surface once you try to use the number 2147483648 or higher in a 32bit environment. To overcome this problem recent versions of operating systems have defined new system calls which typically use 64-bit addressing for file sizes and offsets. Recent Oracle releases make use of these new interfaces but there are a number of issues one should be aware of before deciding to use 'large files'.

Another "special" number is 4Gb. 0xFFFFFFFF in hexadecimal can be interpreted as an UNSIGNED value (4294967295 decimal) which is one less than 4Gb. Adding one to this value yields 0x00000000 in the low order 4 bytes with a '1' carried over. The carried over bit is lost when using 32bit arithmetic. Hence 4Gb is another "special" number where problems may occur. Such issues are also mentioned in this article.

What does this mean when using Oracle ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 32bit issue affects Oracle in a number of ways. In order to use large files you need to have:
  1. An operating system that supports 2Gb+ files or raw devices
  2. An operating system which has an API to support I/O on 2Gb+ files
  3. A version of Oracle which uses this API

Today most platforms support large files and have 64bit APIs for such files. Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs but the situation is very dependent on platform, operating system version and the Oracle version. In some cases 'large file' support is present by default, while in other cases a special patch may be required.

At the time of writing there are some tools within Oracle which have not been updated to use the new API's, most notably tools like EXPORT and SQL*LOADER, but again the exact situation is platform and version specific.

Why use 2Gb+ Datafiles ?
~~~~~~~~~~~~~~~~~~~~~~~~
In this section we will try to summarise the advantages and disadvantages of using "large" files / devices for Oracle datafiles.

Advantages of files larger than 2Gb:
- On most platforms Oracle7 supports up to 1022 datafiles. With files < 2Gb this limits the database size to less than 2044Gb. This is not an issue with Oracle8 which supports many more files. (Oracle8 supported 1022 files PER TABLESPACE). In reality the maximum database size in Oracle7 would be less than 2044Gb due to maintaining separate data in separate tablespaces. Some of these may be much less than 2Gb in size. Larger files allow this 2044Gb limit to be exceeded.
- Larger files can mean less files to manage for smaller databases.
- Less file handle resources required.

Disadvantages of files larger than 2Gb:
- The unit of recovery is larger. A 2Gb file may take between 15 minutes and 1 hour to backup / restore depending on the backup media and disk speeds. An 8Gb file may take 4 times as long.
- Parallelism of backup / recovery operations may be impacted.
- There may be platform specific limitations - Eg: Asynchronous IO operations may be serialised above the 2Gb mark.
- As handling of files above 2Gb may need patches, special configuration etc.. there is an increased risk involved as opposed to smaller files. Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.

Important points if using files >= 2Gb
- Check with the OS Vendor to determine if large files are supported and how to configure for them.
- Check with the OS Vendor what the maximum file size actually is.
- Check with Oracle support if any patches or limitations apply on your platform, OS version and Oracle version.
- Remember to check again if you are considering upgrading either Oracle or the OS in case any patches are required in the release you are moving to.
- Make sure any operating system limits are set correctly to allow access to large files for all users.
- Make sure any backup scripts can also cope with large files.
- Note that there is still a limit to the maximum file size you can use for datafiles above 2Gb in size. The exact limit depends on the DB_BLOCK_SIZE of the database and the platform. On most platforms (Unix, NT, VMS) the limit on file size is around 4194302*DB_BLOCK_SIZE.
- See the details in the Alert in [NOTE:112011.1] which describes problems with resizing files, especially to above 2Gb in size.

Important notes generally
- Be careful when allowing files to automatically resize. It is sensible to always limit the MAXSIZE for AUTOEXTEND files to less than 2Gb if not using 'large files', and to a sensible limit otherwise. Note that due to [BUG:568232] it is possible to specify a value of MAXSIZE larger than Oracle can cope with which may result in internal errors after the resize occurs. (Errors typically include ORA-600 [3292])
- On many platforms Oracle datafiles have an additional header block at the start of the file so creating a file of 2Gb actually requires slightly more than 2Gb of disk space. On Unix platforms the additional header for datafiles is usually DB_BLOCK_SIZE bytes but may be larger when creating datafiles on raw devices.

2Gb related Oracle Errors:
These are a few of the errors which may occur when a 2Gb limit is present. They are not in any particular order.
  ORA-01119 Error in creating datafile xxxx
  ORA-27044 unable to write header block of file
  SVR4 Error: 22: Invalid argument
  ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
  ORA-27070 skgfdisp: async read/write failed
  ORA-02237 invalid file size
  KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
  file limit exceed. Unix error 27, EFBIG

Export and 2Gb
~~~~~~~~~~~~~~

2Gb Export File Size
~~~~~~~~~~~~~~~~~~~~
At the time of writing most versions of export use the default file open API when creating an export file. This means that on many platforms it is impossible to export a file of 2Gb or larger to a file system file.

There are several options available to overcome 2Gb file limits with export such as:
- It is generally possible to write an export > 2Gb to a raw device. Obviously the raw device has to be large enough to fit the entire export into it.
- By exporting to a named pipe (on Unix) one can compress, zip or split up the output. See: "Quick Reference to Exporting >2Gb on Unix" [NOTE:30528.1]
- One can export to tape (on most platforms). See "Exporting to tape on Unix systems" [NOTE:30428.1] (This article also describes in detail how to export to a unix pipe, remote shell etc..)
- Oracle8i allows you to write an export to multiple export files rather than to one large export file.

Other 2Gb Export Issues
~~~~~~~~~~~~~~~~~~~~~~~
Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem with EXPORT on many releases of Oracle such that if you export a large table and specify COMPRESS=Y then it is possible for the NEXT storage clause of the statement in the EXPORT file to contain a size above 2Gb. This will cause import to fail even if IGNORE=Y is specified at import time. This issue is reported in [BUG:708790] and is alerted in [NOTE:62436.1].

An export will typically report errors like this when it hits a 2Gb limit:
  . . exporting table                BIGEXPORT
  EXP-00015: error on row 10660 of table BIGEXPORT, column MYCOL, datatype 96
  EXP-00002: error in writing to export file
  EXP-00002: error in writing to export file
  EXP-00000: Export terminated unsuccessfully

There is a secondary issue reported in [BUG:185855] which indicates that a full database export generates a CREATE TABLESPACE command with the file size specified in BYTES. If the filesize is above 2Gb this may cause an ORA-2237 error when attempting to create the file on IMPORT. This issue can be worked around by creating the tablespace prior to importing, specifying the file size in 'M' instead of in bytes. [BUG:490837] indicates a similar problem.

Export to Tape
~~~~~~~~~~~~~~
The VOLSIZE parameter for export is limited to values less than 4Gb. On some platforms it may be only 2Gb. This is corrected in Oracle 8i. [BUG:490190] describes this problem.

SQL*Loader and 2Gb
~~~~~~~~~~~~~~~~~~
Typically SQL*Loader will error when it attempts to open an input file larger than 2Gb with an error of the form:
  SQL*Loader-500: Unable to open file (bigfile.dat)
  SVR4 Error: 79: Value too large for defined data type
The examples in [NOTE:30528.1] can be modified for use with SQL*Loader for large input data files.

Oracle and other 2Gb issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section lists miscellaneous 2Gb issues:
- From Oracle 8.0.5 onwards 64bit releases are available on most platforms. An extract from the 8.0.5 README file introduces these - see [NOTE:62252.1]
- DBV (the database verification file program) may not be able to scan datafiles larger than 2Gb, reporting "DBV-100".
- "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be specified in 'M' or 'K' to create files larger than 2Gb otherwise the error "ORA-02237: invalid file size" is reported. This is documented in [BUG:185855].
- Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4. Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename> reports ORA-2187: invalid quota specification. This is documented in [BUG:425831]. The workaround is to grant users UNLIMITED TABLESPACE privilege if they need a quota above 2Gb.
- Tools which spool output may error if the spool file reaches 2Gb in size. Eg: sqlplus spool output.
- Certain 'core' functions in Oracle tools do not support large files.
- The UTL_FILE package uses the 'core' functions mentioned above and so is limited by 2Gb restrictions in Oracle releases which do not contain this fix. <Package:UTL_FILE> is a PL/SQL package which allows file IO from within PL/SQL.

Port Specific Information on "Large Files"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Below are references to information on large file support for specific platforms. Although every effort is made to keep the information in these articles up-to-date it is still advisable to carefully test any operation which reads or writes from / to large files:

  Platform            See
  ~~~~~~~~            ~~~
  AIX (RS6000 / SP)   [NOTE:60888.1]
  HP                  [NOTE:62407.1]
  Digital Unix        [NOTE:62426.1]
  Sequent PTX         [NOTE:62415.1]
  Sun Solaris         [NOTE:62409.1]
  Windows NT          Maximum 4Gb files on FAT
                      Theoretical 16Tb on NTFS
                      ** See [NOTE:67421.1] before using large files on NT with Oracle8

  *2 There is a problem with DBVERIFY on 8.1.6. See [BUG:1372172]
  *3 There is a problem with 8.1.6 / 8.1.7 where an autoextend to 4Gb can cause a crash - see [BUG:1668488]
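
As a quick way to check the per-file limit mentioned in the note (around 4194302*DB_BLOCK_SIZE), you can look up your block size; this is only a sketch, and the 8 Gb figure assumes a 2048-byte block size:

SQL> SELECT value FROM v$parameter WHERE name = 'db_block_size';
-- with a 2048-byte block: 4194302 * 2048 bytes, roughly 8 Gb per datafile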

Hope it helps,
bye
Manny.