
I am capturing raw YUV420 output from a decoder. I have three separate plane pointers: Y (1920*1080), U (960*540) and V (960*540).

I want to save the image as a JPEG using OpenCV. I tried using OpenCV's cvtColor:

cv::Mat i_image(cv::Size(columns, rows), CV_8UC3, dataBuffer);
cv::Mat i_image_BGR(cv::Size(columns, rows), CV_8UC3);
cvtColor(i_image, i_image_BGR, cv::COLOR_YCrCb2BGR);
cv::imwrite("/data/data/org.myproject.debug/files/pic1.jpg", i_image_BGR);

But this is the output image that gets saved:

[output image]

Can someone please suggest what is the proper way of saving the image?

YUV Binary files for reference

  • Can you share the complete code? Seems like you are not converting YUV420 to YUV444 properly. – sgarizvi May 08 '18 at 09:34
  • We directly tried to convert from YUV420 to BGR; the above snippet is the whole thing used inside a function. Can you please share more information on how to convert YUV420 to YUV444? – kakumanu-sudhir May 08 '18 at 09:40
  • You said you've 3 pointers but you're only providing one pointer to `i_image`. – zindarod May 08 '18 at 09:52
  • It looks like the U and V components have to be replicated for 4 Y components. If you can share the individual input images (or the values of `dataBuffer`), I will be able to test a sample code. – sgarizvi May 08 '18 at 09:52
  • @zindarod... Yeah, I think OP is mis-interpreting the pixel values in the `dataBuffer`. – sgarizvi May 08 '18 at 09:53
  • @zindarod... dataBuffer is the Y (1920*1080) plane[0] output of the decoder. I am not sure how to utilize the U (960*540) plane[1] and V (960*540) plane[2] in OpenCV. – kakumanu-sudhir May 08 '18 at 10:01
  • @sgarizvi Since I have three data buffers, is it okay if I share the data as binary files with you? – kakumanu-sudhir May 08 '18 at 10:03
  • Read this [page](https://en.wikipedia.org/wiki/YUV#Y%E2%80%B2UV420p_(and_Y%E2%80%B2V12_or_YV12)_to_RGB888_conversion) and notice the total size of the buffer and the positions of the Y, U and V components. Create a new buffer and copy the data from the 3 pointers to the correct positions in the new buffer (see the sketch after these comments). – zindarod May 08 '18 at 10:09
  • @sgarizvi Yeah, he just needs to understand the YUV data layout. – zindarod May 08 '18 at 10:09
  • @kakumanu-sudhir... Yes, please share the binary file. – sgarizvi May 08 '18 at 10:18
  • @sgarizvi, I have edited the post with a link to the uploaded YUV binary files. Thanks. – kakumanu-sudhir May 08 '18 at 12:13
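
As a side note to zindarod's suggestion above, here is a minimal sketch (not from the question or any answer) of copying the three decoder plane pointers into one contiguous I420 buffer: Y first, then U, then V. The plane sizes match the question (1920*1080 Y, 960*540 U and V), the planes are assumed to be tightly packed (stride equals width), and the function name packI420 is purely illustrative.

#include <cstring>
#include <vector>

// Pack the three separate decoder planes into a single contiguous I420 buffer
// (full-resolution Y followed by the quarter-resolution U and V planes).
// Assumes the planes are tightly packed, i.e. stride == width.
std::vector<unsigned char> packI420(const unsigned char* y,
                                    const unsigned char* u,
                                    const unsigned char* v,
                                    int width, int height)
{
    const size_t ySize = static_cast<size_t>(width) * height;
    const size_t cSize = ySize / 4; // each chroma plane is (width/2) * (height/2)

    std::vector<unsigned char> buffer(ySize + 2 * cSize);
    std::memcpy(buffer.data(), y, ySize);                 // Y plane
    std::memcpy(buffer.data() + ySize, u, cSize);         // U plane
    std::memcpy(buffer.data() + ySize + cSize, v, cSize); // V plane
    return buffer;
}

The resulting buffer can then be wrapped as a single 1-channel OpenCV Mat with height*3/2 rows, as described in the second answer below.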

2 Answers


Following is the process to convert the provided YUV files into an RGB image:

  • Read the Y, U and V binary files into byte buffers.
  • Create OpenCV Mat objects from those buffers.
  • Resize the U and V Mats to the size of Y.
  • Merge Y and the resized U and V into a single 3-channel Mat.
  • Convert from YUV to BGR.

Be advised that the resizing step with INTER_NEAREST is just an optimized way of repeating each U and V value over a 2x2 block of pixels. It is only valid because Y has twice the resolution of U and V in both dimensions; it has not been tested for, and may not hold for, arbitrary image sizes.

Here is the code for the above-mentioned process.

#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

std::vector<unsigned char> readBytesFromFile(const char* filename)
{
    std::vector<unsigned char> result;

    FILE* f = fopen(filename, "rb");
    if (!f)
        return result;      // Return an empty buffer if the file could not be opened

    fseek(f, 0, SEEK_END);  // Jump to the end of the file
    long length = ftell(f); // Get the current byte offset in the file
    rewind(f);              // Jump back to the beginning of the file

    result.resize(length);
    fread(result.data(), 1, length, f); // Read in the entire file
    fclose(f);                          // Close the file

    return result;
}

int main(int argc, char** argv)
{
    cv::Size actual_size(1920, 1080);
    cv::Size half_size(960, 540);

    //Read y, u and v in bytes arrays
    auto y_buffer = readBytesFromFile("ypixel.bin");
    auto u_buffer = readBytesFromFile("upixel.bin");
    auto v_buffer = readBytesFromFile("vpixel.bin");


    cv::Mat y(actual_size, CV_8UC1, y_buffer.data());
    cv::Mat u(half_size, CV_8UC1, u_buffer.data());
    cv::Mat v(half_size, CV_8UC1, v_buffer.data());

    cv::Mat u_resized, v_resized;
    cv::resize(u, u_resized, actual_size, 0, 0, cv::INTER_NEAREST); //repeat u values 4 times
    cv::resize(v, v_resized, actual_size, 0, 0, cv::INTER_NEAREST); //repeat v values 4 times

    cv::Mat yuv;

    std::vector<cv::Mat> yuv_channels = { y, u_resized, v_resized };
    cv::merge(yuv_channels, yuv);

    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR);
    cv::imwrite("bgr.jpg", bgr);

    return 0;
}

Compiled and tested with the following command:

g++ -o yuv2rgb -std=c++11 yuv2rgb.cpp -L/usr/local/lib -lopencv_core -lopencv_imgcodecs -lopencv_highgui -lopencv_imgproc

The following output image is generated by executing the above code:

YUV To BGR Output

sgarizvi

I think the OpenCV matrix for your input YUV420 planar image should have a 1-channel format instead of 3 channels: place the Y channel there first, then U, then V. I found a very similar question HERE. Planar YUV420 and NV12 are very similar (NV12 just stores the U and V samples interleaved instead of in separate planes).
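
For illustration, here is a minimal sketch of that layout (not from the answer; the 1920*1080 size is carried over from the question, and the buffer is assumed to already contain Y, then U, then V contiguously). With the planes stacked in a single 1-channel Mat, cv::COLOR_YUV2BGR_I420 performs the chroma upsampling and YUV-to-BGR conversion in one call:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    const int width = 1920, height = 1080;

    // Single contiguous I420 buffer: the Y plane (width*height bytes) followed
    // by the quarter-size U and V planes, e.g. assembled from the three decoder
    // pointers as in the earlier sketch, or read from the .bin files in order.
    std::vector<unsigned char> i420(width * height * 3 / 2);
    // ... fill i420 with the Y, U and V data ...

    // Wrap the buffer as a 1-channel Mat with height*3/2 rows (no copy is made).
    cv::Mat yuv(height * 3 / 2, width, CV_8UC1, i420.data());

    // OpenCV upsamples the chroma planes and converts to BGR in one step.
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);

    cv::imwrite("bgr.jpg", bgr);
    return 0;
}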