Today's newspaper had a Magic Eye or Autostereogram image in the kids' section... it started me wondering: could the tools available in Blender be used to render a simple scene (Suzanne of course - a repeating grease pencil Suzanne) as a Magic Eye?

autostereogram from the linked article

Image and Python code from Saving Autostereograms AKA Magic Eyes - albeit that explains recovering the 3D object from the image rather than generating one.

There's another Python generator, magicpy, which takes a greyscale bitmap as input and could be used in post-processing.

batFINGER
  • I reckon it’s feasible. Obviously just coding it in python and outputting an image is somewhat cheating... so either compositor or perhaps a shader with some image feedback would work. – Rich Sedman Mar 07 '21 at 16:09
  • FWIW That's my gut feeling too. ... but my knowledge of howto is very limited ---> resorting to cheating. – batFINGER Mar 07 '21 at 16:27
  • It’s essentially just a horizontal clone based on depth at that point (I developed one in C many many years ago as part of a uni project). The tricky bit will be repeating it as you need to recursively cover the whole image. I reckon a node group in the compositor simply repeated a few times should do it. – Rich Sedman Mar 07 '21 at 17:36
  • Managed to get a basic setup working in the compositor as a proof of concept. Here's the link: https://imgur.com/a/OzO3sql I'll write a proper answer and explanation shortly. – Rich Sedman Mar 08 '21 at 13:54
  • You are a legend. Did imgur crash last night? Tried 4 times to put in the animated shark from wiki. – batFINGER Mar 08 '21 at 14:49
  • I was like obsessed with these as a kid. – Allen Simpson Mar 08 '21 at 15:12
  • Remember people staring at framed versions in shopping centres. Thought they had some weird taste in art... (like who knows with some of it?), until I finally twigged as to what they were seeing. – batFINGER Mar 08 '21 at 15:42
  • I didn't have a problem with imgur - the problem must have been fixed when I tried. I remember standing in a shop looking at an image and wondering what all the fuss was about, and then having the realisation that there's something there and being able to see it. I'm in the process of putting together an answer for this... struggling with the explanation and making it understandable! – Rich Sedman Mar 08 '21 at 23:02
  • Remember shopping? – Allen Simpson Mar 08 '21 at 23:22

1 Answer


A "Magic Eye" image encodes the depth information into the repeating horizontal pattern such that "going cross-eyed" and matching up adjacent copies of the image fools our eyes into interpreting it as "depth". This can be demonstrated by looking at the following image :

3 suzannes

Look at the central image. Relax your eyes so that you're looking "through" the screen: one eye stays focussed on the middle head while the image of the left or right-hand one (which you find easier will probably depend on your "dominant" eye) crosses over to coincide with it, at which point you should see 4 monkey heads (you may need to position your head closer to or further from the screen for this one). This is essentially what you need to do with a "Magic Eye" picture - but with the repeating pattern coinciding with the adjacent, almost identical clone of the pattern. Note that for this to work your head must be aligned left-right with the horizontal of the image.

The same principle is behind a full "Magic Eye" image - where two adjacent, almost identical sections of the pattern coincide from both eyes to fool your brain into perceiving depth. This is achieved by using a 'seed' texture at one side of the image and cloning it across the page based on the 'depth' of the original rendering at each point. The wider the spacing between 'clones', the further 'into' the page the surface will appear. We can achieve this in the compositor by using the Depth information from the render to control the offset of a Displace operation that clones the seed texture across the image.
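To make that mapping concrete before diving into nodes, here's a minimal Python/NumPy sketch of the same idea - a simplification for illustration, not the compositor setup itself, and all names and defaults are my own:

```python
import numpy as np

def autostereogram(depth, seed, multiplier=40):
    """Left-to-right cloning driven by a depth map.

    depth : (h, w) Z pass already normalized to 0..1 (larger = further)
    seed  : 2D greyscale texture; its left edge seeds the first 'band'
    """
    h, w = depth.shape
    out = np.zeros((h, w), dtype=seed.dtype)
    for y in range(h):
        for x in range(w):
            # Wider spacing between clones reads as further 'into' the page
            spacing = int(multiplier * (depth[y, x] + 1.0))
            if x < spacing:
                # Left of the first band: copy straight from the seed
                out[y, x] = seed[y % seed.shape[0], x % seed.shape[1]]
            else:
                # Otherwise clone from one band to the left
                out[y, x] = out[y, x - spacing]
    return out
```

The compositor version below effectively reproduces the `out[y, x] = out[y, x - spacing]` step with Displace and Alpha Over nodes.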

Start by generating the 'seed' texture. This can be any image but it needs to have enough contrast and detail. One edge of it will be used to clone and repeat over the final image to produce the effect (depending on whether we clone left-to-right or right-to-left). The distance between copies will fool the eye into seeing the depth at that point. For my initial renderings I simply used a Distorted Noise texture as shown:

seed texture
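If you want to experiment outside Blender, a rough stand-in for such a texture (my own improvisation, not the actual Distorted Noise node) could be:

```python
import numpy as np

def make_seed(height, width, rng_seed=0):
    # Random noise given a crude one-pixel blur and a threshold, so the
    # result has blotchy, distinguishable detail rather than pixel static.
    rng = np.random.default_rng(rng_seed)
    noise = rng.random((height, width))
    blurred = (noise
               + np.roll(noise, 1, axis=0) + np.roll(noise, -1, axis=0)
               + np.roll(noise, 1, axis=1) + np.roll(noise, -1, axis=1)) / 5.0
    return (blurred > 0.5).astype(np.float32)
```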

The clone is achieved by using the Depth at each point in the render to displace the texture and using Alpha Over to combine with the previous image (so that it's a clone rather than just displacement). Note that anything outside the bounds of the image will be ignored (transparent) so that we don't copy from outside the image boundary over our seed. Everywhere else we'll get the texture displaced based on the distance at that point - so it will be a clone of the seed horizontally in the image.

one clone

Note that the inputs of the group are passed through to the outputs. This allows multiple instances of the clone to be chained together easily.
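In code terms, one pass of the group behaves something like this sketch (my NumPy rendering of the Displace-plus-Alpha-Over behaviour described above, not the node setup itself):

```python
import numpy as np

def clone_pass(image, spacing_px):
    # Displace: each pixel samples from spacing_px pixels to its left.
    # Alpha Over: out-of-bounds samples are transparent, so the existing
    # pixel (e.g. the seed strip at the edge) is kept instead.
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(h):
        for x in range(w):
            src = x - spacing_px[y, x]
            if src >= 0:
                out[y, x] = image[y, src]
    return out
```

Like the node group, its inputs and outputs line up, so passes can be chained.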

Connect the clone up to the texture along with some nodes to adjust the raw Depth values to a suitable displacement distance. Note that smaller displacements will make for a "flatter" final result.

one iteration (seed connected to one 'clone' group)

By repeating the 'clone' group multiple times we effectively clone to each subsequent 'band' of repetitions of the seed. This can be visualised by taking a snapshot of the result at each stage along the chain:

successive clones

Note how the initial 'seed' is cloned to create the first 'band' in the first frame. Each subsequent stage leaves the left-hand region unaffected (since that's already a displaced clone of the section to its left, according to the depth map) and adds on the next 'band', which was previously just a displaced copy of the entire seed texture.

If there aren't enough bands to reach all the way across the image there will be "flat" regions on the far side - link in more copies of the 'Clone' group to resolve this (although the more 'Clone' operations you add, the more work the compositor will need to do to complete the render). The number of required iterations depends on the size of your Multiplier - larger values fill the image quicker by having a wider spacing (and so larger apparent depth). Smaller values give a shallower image but need more iterations since the bands are narrower and so closer together (meaning you need more of them to span the whole page).

lots of iterations
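In terms of the `clone_pass` sketch above, the chain is just repeated application (again, the names are my own):

```python
def magic_eye(seed_image, spacing_px, n_passes):
    # Each pass fixes up one more 'band'; uses clone_pass() from above.
    result = seed_image.copy()
    for _ in range(n_passes):
        result = clone_pass(result, spacing_px)
    return result

# A safe pass count: enough of the narrowest bands to cross the frame,
# e.g. n_passes = seed_image.shape[1] // int(spacing_px.min()) + 1
```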

Note that the Depth of the original render is Normalized so that it ranges from 0.0 to 1.0, and an offset (in this case 1.0) is added to give a minimum spacing.
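That adjustment might be sketched as follows (assuming a multiplier of 40 pixels - the right value depends on your resolution):

```python
import numpy as np

def depth_to_spacing(z_raw, multiplier=40):
    # Mirrors the Normalize node plus the Math nodes: scale the raw Z
    # pass to 0..1, add 1.0 so the spacing never collapses to zero,
    # then multiply up to a pixel distance.
    z = (z_raw - z_raw.min()) / (z_raw.max() - z_raw.min())
    return (multiplier * (z + 1.0)).astype(int)
```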

This produces the following result:

result

Any 'seed' image can be used - the only requirement is that it is varied and contains small, distinguishable details. I experimented with a family of monkeys as I thought that would be most appropriate:

result2

Blend file included

Rich Sedman