python - IOError: Can't read data (Can't open directory) - Missing gzip compression filter


I have not worked with HDF5 files before, and I got some example files to get started. I have been checking out all the basics with h5py: looking at the different groups in these files, their names, keys, values and so on. Everything works fine until I want to look at the datasets saved in the groups. I can get their .shape and .dtype, but when I try to index a random value (such as grp["dset"][0]), I get the following error:

  IOError                                   Traceback (most recent call last)
  <ipython-input-45-509cebb66565> in <module>()
        1 print geno["matrix"]
        2 print geno["matrix"].dtype
  ----> 3 geno["matrix"][0]

  /home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/dataset.pyc in __getitem__(self, args)
      443         mspace = h5s.create_simple(mshape)
      444         fspace = selection._id
  --> 445         self.id.read(mspace, fspace, arr, mtype)
      446
      447         # Patch up the output for NumPy

  /home/sarah/anaconda/lib/python2.7/site-packages/h5py/h5d.so in h5py.h5d.DatasetID.read (h5py/h5d.c:2782)()

  /home/sarah/anaconda/lib/python2.7/site-packages/h5py/_proxy.so in h5py._proxy.dset_rw (h5py/_proxy.c:1709)()

  /home/sarah/anaconda/lib/python2.7/site-packages/h5py/_proxy.so in h5py._proxy.H5PY_H5Dread (h5py/_proxy.c:1379)()

  IOError: Can't read data (Can't open directory)
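For reference, the inspection steps described above can be sketched like this (the file and dataset names are made up, since the original data file is not available; note that .shape and .dtype only read metadata, while slicing is what actually decompresses data, which is why only the last line can fail on a missing filter):

```python
import h5py
import numpy as np

# Create a small stand-in file so the inspection steps can be shown
# end-to-end (hypothetical names; the original "matrix" dataset is
# not available here).
with h5py.File("example.h5", "w") as f:
    f.create_dataset("matrix", data=np.arange(12, dtype="uint8").reshape(3, 4))

with h5py.File("example.h5", "r") as geno:
    print(list(geno.keys()))         # names of groups/datasets in the file
    dset = geno["matrix"]
    print(dset.shape)                # metadata only, no filter needed
    print(dset.dtype)                # metadata only, no filter needed
    print(dset[0])                   # slicing triggers the actual read,
                                     # which is where the IOError appeared
```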

I posted this problem elsewhere, and it was suggested that there may be a filter on the dataset that I don't have installed. But the HDF5 file was created using only gzip compression, which should be a portable standard, as far as I know.
Does anyone know what I could be missing here? I can't find this error or any similar problem anywhere, and the file, including the problematic dataset, can easily be opened with the HDFView software. Apparently the error occurs because, for some reason, the gzip compression filter is not available on my system. If I try to create an example file with gzip compression, this happens:

  ---------------------------------------------------------------------------
  ValueError                                Traceback (most recent call last)
  <ipython-input-33-dd7b9e3b6314> in <module>()
        1 grp = f.create_group("subgroup")
  ----> 2 grp_dset = grp.create_dataset("dataset", (50,), dtype="uint8", chunks=True, compression="gzip")

  /home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/group.pyc in create_dataset(self, name, shape, dtype, data, **kwds)
       92         """
       93
  ---> 94         dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)
       95         dset = dataset.Dataset(dsid)
       96         if name is not None:

  /home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/dataset.pyc in make_new_dset(parent, shape, dtype, data, chunks, compression, shuffle, fletcher32, maxshape, compression_opts, fillvalue, scaleoffset, track_times)
       97
       98     dcpl = filters.generate_dcpl(shape, dtype, chunks, compression, compression_opts,
  ---> 99                                  shuffle, fletcher32, maxshape, scaleoffset)
      100
      101     if fillvalue is not None:

  /home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/filters.pyc in generate_dcpl(shape, dtype, chunks, compression, compression_opts, shuffle, fletcher32, maxshape, scaleoffset)
      102     if compression not in encode:
  --> 103         raise ValueError('Compression filter "%s" is unavailable' % compression)
      104
      105     if compression == 'gzip':

  ValueError: Compression filter "gzip" is unavailable

Does anyone have experience with this? Did something go wrong with the HDF5 library when the h5py package was installed?
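Not an answer, but a way to confirm the diagnosis: h5py exposes the HDF5 filter registry through its low-level h5z module, so you can ask the linked HDF5 library directly whether the deflate (gzip) filter is present. A minimal sketch:

```python
import h5py
from h5py import h5z

# Report the versions in use and query the HDF5 filter registry.
# If deflate (gzip) is missing, reads of gzip-compressed datasets
# fail exactly as in the traceback above.
print("h5py version:", h5py.version.version)
print("HDF5 version:", h5py.version.hdf5_version)
print("gzip/deflate available:", h5z.filter_avail(h5z.FILTER_DEFLATE))
print("shuffle available:", h5z.filter_avail(h5z.FILTER_SHUFFLE))
```

On a healthy installation both filters should report as available, since deflate ships with the HDF5 core library itself.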

I can only comment, since my reputation is too low.

I had the same problem; I just ran "conda update anaconda" and the problem was gone.
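After updating, a quick way to verify the fix is to repeat the dataset creation that previously failed, using the same parameters as in the traceback above (the file name here is made up):

```python
import h5py

# Re-run the operation that previously raised
# ValueError: Compression filter "gzip" is unavailable
with h5py.File("gzip_check.h5", "w") as f:
    grp = f.create_group("subgroup")
    dset = grp.create_dataset("dataset", (50,), dtype="uint8",
                              chunks=True, compression="gzip")
    print(dset.compression)  # -> gzip
```

If this runs without raising, the gzip filter is available again and the original file should also be readable.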

