Thread: Is the Ursa Mini Pro near as good as the Canon C300 Mark II?

  1. #21  
    Senior Member
    Join Date
    Aug 2014
    Posts
    1,687
    Returning to the OP's inquiry: the C300 is terrific broadcast equipment, while the BMD cameras are aimed at narrative and cinematic production. You can use the C300 for narrative shooting, but it would be a bit complicated to use the BMD cameras for ENG-style broadcast capture. They need plenty of light to get quality broadcast footage.

  2. #22  
    Senior Member
    Join Date
    Sep 2013
    Posts
    806
    Quote Originally Posted by Asyndeton View Post
    Referring to the UMP and Alexa video: Both look great; it's hard to do a pixel-peep comparison without knowing what settings were used on both cameras (and lenses), what codecs they were shot with, how the footage was ingested, and what was done in the grade. Based on what's there already it wouldn't be hard to match them even closer. Olan has a great sense for lighting, production design and lens choice, among other variables. The user of the camera is very talented; that's why it looks so good no matter what camera he used.

    As for the OP's original question, it really depends on what you're using the tool for, as well as personal preference. If you're talking strictly narrative usage, I'd say the UMP will hands-down beat any and all cameras in the Canon CXXX line for the codec options alone, and BM's Film log is much better for grading than any flavor of C-Log.
    I'm pretty sure he's using Leica R's. Most of Olan's recent work features the Leicas.

  3. #23  
    Quote Originally Posted by John Brawley View Post
    I'm sure it would.

    I guess it depends on whether that's what you want. It's nice to have the choice, right?

    There are a lot who prefer the sharpness and extra resolution too.

    https://www.cinematography.net/CE-2017%20STILLS.html

    You can see that it still doesn't remove all the aliasing errors that can be generated (look at the Alexa SXT shots here), and neither does the RED Helium's. In fact, the Sony F55 / F65 seem to be the only ones that have their OLPF tuned perfectly.

    You can also see here how the Ursa's sharpness pops even against higher-resolution sensors (but with resolution-lowering OLPFs). Look at the hair and eyelashes full screen.

    https://agdok.de/de_DE/kameratest20172

    JB
    Honestly, although there are some pretty bad aliasing/false colour artifacts on display here, I think a lot of that has to do with how the Bayer data was processed after the fact, and I don't have enough information from that first website to double-check their work. Pretty much anything there that was shot in CinemaDNG looks like it was processed in Resolve (I'd recognize those maze artifacts anywhere), and it still blows my mind how ugly the BMD demosaic is when it comes to aliasing/false colour. IMO, if you're going to build cameras without OLPFs, then it'd probably be a good idea to let users tune the demosaic algorithm to compensate for moiré a little better.

    Just as an example, look at the difference between the C200 and C300 Mk.II; one was DNG recorded with a 7Q and (likely) processed in Resolve, whereas the other was C200 RAW, probably processed in Canon's software. You could argue that patch 3 looks better on the 300 in Resolve, but I think I'd still prefer the muted/subtler output of the other wedges. While neither is ideal, I feel like you could definitely get away with the 200's look if you had to, because the false colour on the C300 is so glaring.

  4. #24  
    To actually respond to the OP though, I don't think you can buy a bad camera in 2018 unless you go out of your way. There are going to be some cameras that will be more appropriate for what you're doing, and there are going to be some that have annoying quirks you have to work around, but all told we're getting pretty spoiled by manufacturers.

    If I had my choice, I'd probably go with the C300 Mk.2, TBH. While I think there are a lot of factors that make the UMP worth advocating for, I have to service clients, and clients often request Canon cameras. Rather than argue with them about differences that will end up being fairly marginal, I think it's wiser to invest in equipment that you know can guarantee a return.

  5. #25  
    Moderator
    Join Date
    Apr 2012
    Location
    Atlanta Georgia
    Posts
    2,823
    Quote Originally Posted by Alex.Mitchell View Post
    Honestly, although there are some pretty bad aliasing/false colour artifacts on display here, I think a lot of that has to do with how the Bayer data was processed after the fact, and I don't have enough information from that first website to double-check their work. Pretty much anything there that was shot in CinemaDNG looks like it was processed in Resolve (I'd recognize those maze artifacts anywhere), and it still blows my mind how ugly the BMD demosaic is when it comes to aliasing/false colour. IMO, if you're going to build cameras without OLPFs, then it'd probably be a good idea to let users tune the demosaic algorithm to compensate for moiré a little better.

    Just as an example, look at the difference between the C200 and C300 Mk.II; one was DNG recorded with a 7Q and (likely) processed in Resolve, whereas the other was C200 RAW, probably processed in Canon's software. You could argue that patch 3 looks better on the 300 in Resolve, but I think I'd still prefer the muted/subtler output of the other wedges. While neither is ideal, I feel like you could definitely get away with the 200's look if you had to, because the false colour on the C300 is so glaring.
    Geoff mentions his process...

    "The images are from the QT UHD files that were created from EXRs generated by the relevant camera manufacturers' own software."

    You say that the Resolve demosaic is "ugly", and yet for many platforms there are options not to use Resolve's own demosaic.

    In Resolve you can choose which debayer you use. For example, Arri's debayer is available as a choice, or you can use the Resolve debayer. Same with Sony RAW: you can choose Sony's debayer or Resolve's. And the same again with RED: you can use theirs or the Resolve one (though it still has to go through their SDK).

    Can you suggest a RAW motion-imaging colour correction application you use that you think offers better debayer options and gives you the real-time colour correction tools that Resolve offers?

    From what I've seen within the Resolve ecosystem, everyone PREFERS the native Resolve demosaic to the Arri / RED / Sony algorithms.

    I understand what you're saying and I know where you're going with this, but making a real-time platform that can do all you want is especially challenging, as I'm sure you know. It sounds like you want the deeper options that some photographic RAW processors enable, but from what I understand that really and truly is a challenge on motion-based images, with all the temporal processes that get applied.

    So, like the way RawTherapee offers demosaic options? Or darktable?

    I like that kind of control, but on my machine both of those applications run very, very slowly, and that's only on single frames. Whereas C1, Lightroom and, dare I say it, Resolve run a heck of a lot faster. I personally love the demosaic I get from C1, even though it doesn't have the kind of control you have in RawTherapee or darktable.

    From memory, Geoff (who did the first test) has a preference for Baselight.

    JB

  6. #26  
    Quote Originally Posted by John Brawley View Post
    Geoff mentions his process...

    "The images are from the QT UHD files that were created from EXRs generated by the relevant camera manufacturers' own software."
    Honestly, that still tells us barely anything...

    - Which pieces of software were used exactly, and what versions? As an example, if they processed the Helium footage in RedCine did it have the full IPP2 implementation included? What about the software used to generate the Quicktime files?
    - Also, if footage was shot in any resolution besides UHD, when was that footage scaled to UHD and with what algorithm?
    - What resolution was each camera captured in?
    - What codec/compression ratio was each camera captured in?
    - Which lenses were used and what were they set to?
    - What ISO was each camera set to?
    - Etc.

    If someone is going to shoot a test like this, then they need to lay out the variables so we can examine the work ourselves or, better yet, host some stills from the footage so we can process it ourselves too. I really appreciate him taking the time to set all that up, but I want to know if any corners were cut. As an aside, how amazing does the F55/F65 look? I had no idea the filtering was that good...

    As for Resolve's debayer, I should mention before I say anything else that I understand I'm being picky and demanding. I also want to acknowledge that there would be no way for me to overstate the positive effect BMD has had on my career personally, or on the industry as a whole; they've made things possible I only dreamed of as a kid in film school. Lastly, I also recognize that you can't squeeze blood from a stone; once aliasing happens at the sensor level there's really only so much that can be done, and even a good debayering algorithm will only ever resolve ~75-82% of the sensor's resolution in a best-case scenario.
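    For a sense of scale, that ~75-82% figure works out like this (my own arithmetic; the 4608-photosite horizontal count is an assumption from published 4.6K specs, not something stated in this thread):

```python
# Effective horizontal resolution after a good demosaic, per the
# rule-of-thumb range quoted above. 4608 px is the assumed horizontal
# photosite count of a "4.6K" Bayer sensor.
h_px = 4608
for efficiency in (0.75, 0.82):
    print(f"{efficiency:.0%} of {h_px} px -> ~{h_px * efficiency:.0f} px resolved")
```

    So even a "4.6K" Bayer sensor effectively delivers something closer to 3.5-3.8K of real luminance detail.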

    That said, I've just noticed some weird discrepancies with the Resolve algorithm over the years. Here's an exercise: try shooting a problematic frequency with a BMPCC in DNG and in ProRes. My experience, even as recently as Resolve 14.2, is that the Resolve debayer introduces noticeable maze artifacts whereas the debayer in camera does not. Obviously you still get false detail and rainbows all over the place, but the mazing isn't there. As far as virtually every other variable is concerned they seem comparable, but this one difference perplexes and frustrates me.
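    To make the exercise concrete, here's a toy sketch (plain NumPy, and nothing like Resolve's actual algorithm) of why naively demosaicing an unfiltered sensor produces false colour at problematic frequencies:

```python
import numpy as np

def bayer_sample(img):
    """Sample a grayscale scene through an RGGB Bayer pattern.
    Returns (plane, mask) per channel; zeros where a channel wasn't sampled."""
    h, w = img.shape
    planes, masks = {}, {}
    sites = {"r": [(0, 0)], "g": [(0, 1), (1, 0)], "b": [(1, 1)]}
    for ch, offsets in sites.items():
        plane = np.zeros_like(img)
        mask = np.zeros((h, w), dtype=bool)
        for dy, dx in offsets:
            plane[dy::2, dx::2] = img[dy::2, dx::2]
            mask[dy::2, dx::2] = True
        planes[ch], masks[ch] = plane, mask
    return planes, masks

def naive_fill(plane, mask):
    """Fill unsampled sites with the mean of sampled 3x3 neighbours
    (a crude stand-in for bilinear demosaicing)."""
    h, w = plane.shape
    out = plane.copy()
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                win = (slice(max(0, y - 1), min(h, y + 2)),
                       slice(max(0, x - 1), min(w, x + 2)))
                out[y, x] = plane[win][mask[win]].mean()
    return out

# One-photosite-wide vertical stripes: detail right at the sensor's
# Nyquist limit, the kind of frequency a focus trumpet sweeps through.
h, w = 32, 32
scene = np.tile((np.arange(w) % 2).astype(float), (h, 1))

planes, masks = bayer_sample(scene)
rgb = {ch: naive_fill(planes[ch], masks[ch]) for ch in "rgb"}

# The scene is grayscale, so R, G and B should agree everywhere after
# demosaicing. Any spread between the channels is false colour (rainbows).
print("max |R - G| after demosaic:", float(np.abs(rgb["r"] - rgb["g"]).max()))
```

    With this pattern the red sites only ever see dark columns and the blue sites only bright ones, so the reconstructed channels disagree wildly; a smarter, tunable interpolator can trade some of that false colour away, which is exactly the control I'm asking for.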

    I'm not a colour scientist or a programmer, and I have no intention of basing every decision I make off of what a focus trumpet looks like. I'm just an end user who has run into some perplexing issues, and I never really got a satisfactory response from BMD Support about why those issues exist. To be totally honest, processing footage that has been well filtered--from, say, my D16 or a blurred porcupine target--actually looks pretty good in Resolve; sometimes preferable to AMaZE. Where I prefer AMaZE's implementation in RawTherapee is when I need options to deal with specific issues. The long and short of it is that I just don't understand how you could build cameras without an OLPF and then not implement software tools to mitigate the problems caused by that design choice. There's a Black Sun fix for a problem that only affects some of BMD's cameras, and yet their debayer algorithm in Resolve continues to have the same issues it has had since I started using it to process DNGs around v9, and those problems affect DNG footage from every camera BMD makes, plus some that they don't!

    Just seems strange is all.
    Last edited by Alex.Mitchell; 01-09-2018 at 02:40 AM.

  7. #27  
    Moderator
    Join Date
    Apr 2012
    Location
    Atlanta Georgia
    Posts
    2,823
    Quote Originally Posted by Alex.Mitchell View Post
    Honestly, that still tells us barely anything...
    If you follow Geoff Boyle from CML, then you'd know this is a regular shoot he does, with the details on his well-known website. They frown upon cross-posting to other forums like this, but you can look through the very many pages there where he lays out, in the kind of detail you want, how he approaches his tests. Also, most of the RAW files are available to download.

    The point I was making originally was only that, on an aliasing trumpet, most of the cameras WITH an OLPF also generate aliasing errors.

    OLPFs are highly problematic. It's not just a question of having one or not: they have to be very carefully tuned, and that means making a subjective judgement about how much resolution to throw away, which to many is as problematic as not having one in the first place. Look at the total debacle RED created in the not too distant past.
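    To put a rough number on that tuning judgement, here's a back-of-envelope model (an idealized two-spot birefringent filter; real four-spot OLPFs are more complex, and the 5.5 µm pitch is my assumption, not a figure from this thread):

```python
import math

# Idealized birefringent OLPF: splitting each ray into two spots a
# distance d apart multiplies the system MTF by |cos(pi*d*f)|, with its
# first null at f = 1/(2d).
def olpf_mtf(freq_lp_mm, split_mm):
    return abs(math.cos(math.pi * split_mm * freq_lp_mm))

pitch_mm = 0.0055                  # ~5.5 um photosite, roughly 4.6K-S35 scale
nyquist = 1.0 / (2.0 * pitch_mm)   # ~91 line pairs/mm

# Tune the split to null exactly at Nyquist and you pay for it below it:
print(f"MTF at Nyquist:      {olpf_mtf(nyquist, pitch_mm):.2f}")      # ~0.00
print(f"MTF at half-Nyquist: {olpf_mtf(nyquist / 2, pitch_mm):.2f}")  # ~0.71
# A weaker split keeps more contrast but lets more aliasing through --
# that's the subjective trade-off being described above.
print(f"Weaker split, leakage at Nyquist: {olpf_mtf(nyquist, pitch_mm * 0.7):.2f}")
```

    Killing aliasing completely costs you roughly 30% of your contrast at half-Nyquist; backing the filter off recovers contrast but leaks aliasing. There is no free setting.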

    Quote Originally Posted by Alex.Mitchell View Post

    As for Resolve's debayer, I should mention before I say anything else that I understand I'm being picky and demanding.
    Yes, and that's why I asked it as a leading question. If you're trying to process moving images in real time or near real time, then you have massive problems trying to create a workflow that can deal with what you're asking for.

    And an improvement in the debayer won't remove the aliasing artifacts; it may only reduce their visibility a little, and at the cost of a massive performance hit. Is it worth it?

    I know some users who've taken to batch-processing DNGs through ACR or C1 to remove aliasing and then going back into Resolve.

    Quote Originally Posted by Alex.Mitchell View Post
    Here's an exercise: try shooting a problematic frequency with a BMPCC in DNG and in ProRes. My experience, even as recently as Resolve 14.2, is that the Resolve debayer introduces noticeable maze artifacts whereas the debayer in camera does not. Obviously you still get false detail and rainbows all over the place, but the mazing isn't there. As far as virtually every other variable is concerned they seem comparable, but this one difference perplexes and frustrates me.
    I know that there are differences in how ProRes affects things like aliasing artifacts and noise, because of the way the image is filtered (pre-filtered?) for conversion to ProRes in camera. I noticed this way back on the original BMCC, because it can actually appear to have better noise / low-light performance when you shoot in ProRes.

    I haven't checked in for a while, but the algorithm is the same in Resolve as it is in camera, EXCEPT if you use the "force higher quality" option.

    Quote Originally Posted by Alex.Mitchell View Post
    The long and short of it is that I just don't understand how you could build cameras without an OLPF and then not implement software tools to mitigate the problems caused by that design choice. There's a Black Sun fix for a problem that only affects some of BMD's cameras, and yet their debayer algorithm in Resolve continues to have the same issues it has had since I started using it to process DNGs around v9, and those problems affect DNG footage from every camera BMD makes, plus some that they don't!

    Just seems strange is all.
    I do know that this is something they have someone working on full time, purely on improving it. It's gone through several iterations since I've been working with them, and I can't say more than: expect it to get better soon.

    The imaging trend in stills photography is to not have an OLPF at all. Most of the cameras I've bought recently don't have one, and even advertise the omission as a selling feature: Leica S, the Leica M digitals (M8, M9, M240, M10), Olympus E-M1 Mark II. BMD have made it possible to fit an OLPF optionally later, and there are multiple aftermarket options for some of their cameras, and for RED cameras too. You can, in effect, choose to have one or not. Isn't it good to be able to make that choice? In the clip above the 8K RED looks softer than the 4.6K Ursa Mini. Maybe some of that detail is false detail, but I think RED are overly aggressive now with their OLPFs. Also, you praise the Sony based on the chart, but I had big aliasing issues with the F55 on a shoot a couple of years ago; it's the only time I've actually had shots rejected at QA.

    I've shot many cameras for motion and stills over the years, with and without OLPFs, and I'd rather deal with the sharper images, false detail and occasional aliasing of no OLPF than with soft images that still sometimes have aliasing issues. I have only once had to fix aliasing in stills, because of a particular fabric, and once in motion, on a camera that has one of the better OLPFs.

    I don't think it's as cut and dried as you present it.

    JB

  8. #28  
    


    Quote Originally Posted by John Brawley View Post
    In the clip above the 8K RED looks softer than the 4.6K Ursa Mini.
    Does it though? Even if we could compare an 8K camera to a 4.6K camera in a 1080p web video, the data seems pretty clear to me: Helium nearly out-resolves any lens you throw on it, so if you see soft 8K footage it's probably not because of the OLPF. To wit, I just downloaded those EXRs from Geoff's shootout, and none of that aliasing/moiré is in the Helium imagery. Here's what the Scarlet, C200, UMP, and Helium actually look like...



    Of course the Scarlet's appropriately filtered image is the softest, but aliasing has been adequately controlled to my eye. Moreover, I have to enlarge the Scarlet's image here by 1100% to be bothered by the "softness", whereas the UMP's false colour jumps out pretty harshly when viewed normally, and the C200's OLPF isn't doing itself many favours either, IMO. It's not that the UMP doesn't have a lot of fantastic features advocating for it, but this is obviously one of its Achilles' heels.

    10 times out of 10 I'll take the softer image so that I never have to play Rainbow Roulette on set. You mentioned near the end of your post that you've only had to deal with aliasing twice in your career; all I can say is that I have definitely not been that lucky, and neither have many of my colleagues. I personally think OLPFs are a no-brainer, and until pixel density reaches the point where they become unnecessary I refuse to use a camera without one. Call me crazy, but I feel like we're at a point in sensor design where we do not need to chase the perception of sharpness at the expense of contamination. Maybe that makes me a heretic, though?

    Quote Originally Posted by John Brawley View Post
    OLPFs are highly problematic.
    Are they though? Other than “softness”—which feels more like an uncharitable way of describing a sensor that is resolving the maximum amount of detail it can without introducing aliasing—what “problems” are caused by properly designed OLPFs that aren’t also caused by the sensor stack or sensor design in general? I feel like the problem with OLPFs, if you can call it that, is that a great many end users don’t understand sensor design/signal processing well enough to know what they can reasonably expect from a digital imaging system.

    Just for my edification, what problem did RED have with their OLPFs? The only one I remember was when they claimed that Dragon would be a stop and a half faster than Mysterium-X, which they had to walk back after release, and again after implementing the STH OLPF. Unless we're thinking of different things, that strikes me less as an OLPF problem than a messaging problem.

    Quote Originally Posted by John Brawley View Post
    The imaging trend in stills photography is to not have an OLPF at all.
    There are absolutely stills cameras without OLPFs, but they can get away with it because those sensors are typically much denser than 4K S35 sensors. The BMD 4.6K sensor has a density of roughly 33,000 px/mm^2, whereas the E-M1 Mk.2 you mentioned has roughly 90,600 px/mm^2. There does come a point where an OLPF becomes unnecessary, but we're absolutely not there with S35 4K sensors; not by a long shot.
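    For anyone who wants to check my figures, the density numbers come from arithmetic like this. The sensor dimensions and photosite counts are my assumptions from published specs (4608 x 2592 on ~25.34 x 14.25 mm for the 4.6K; 5184 x 3888 on a ~17.3 x 13.0 mm Four Thirds chip for the E-M1 Mk.II), not figures from this thread:

```python
def pixel_density(h_px, v_px, width_mm, height_mm):
    """Photosites per square millimetre of sensor area."""
    return (h_px * v_px) / (width_mm * height_mm)

# Assumed published specs, see the lead-in above.
ump = pixel_density(4608, 2592, 25.34, 14.25)
em1 = pixel_density(5184, 3888, 17.3, 13.0)
print(f"UMP 4.6K:   {ump:,.0f} px/mm^2")   # ~33,000, matching the post
print(f"E-M1 Mk.II: {em1:,.0f} px/mm^2")   # ~90,000, same ballpark as the post
```

    Whatever exact dimensions you plug in, the Micro Four Thirds stills sensor ends up nearly three times denser per square millimetre, which is why it can shrug off the missing OLPF more gracefully.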

    Quote Originally Posted by John Brawley View Post
    BMD have made it possible to fit an OLPF optionally later.
    Mm, that isn’t how it feels but I’m open to changing my interpretation. To me, the difference between how RED is approaching this and how BMD is approaching this is that the sensor stack in RED cameras from the Epic Dragon forward are expressly designed to be user serviceable. BMD made the call—for what are probably solid logistical, financial, and engineering reasons—not to include an OLPF in their sensor stack, and a cottage industry of warranty-voiding OLPF installation kits has sprung up over the years. One company is officially sanctioning OLPF replacements and the other isn’t, unless there’s a policy on these I’m not aware of.

    Again, I just can't see advocating against OLPFs on the current crop of 4K S35/2K S16 sensors. Yes, we all want more detail but the way to do that is by increasing the sampling frequency, not by introducing image artifacts. As for the debayering stuff, I'm gonna have to come back to that when I'm not so exhausted.
    Last edited by Alex.Mitchell; 01-11-2018 at 04:29 AM.

  9. #29  
    Senior Member shijan's Avatar
    Join Date
    Apr 2012
    Location
    Odesa, UA
    Posts
    738
    Agree. I also always prefer a softer but 100% rainbow-free image to fake sharpness with weird rainbow artifacts.

  10. #30  
    Senior Member
    Join Date
    Sep 2012
    Posts
    955
    As someone who has used RED cameras starting with the RED One and BM cameras starting with the BMCC, I can say that I also prefer softness to rainbows. And I've seen massive aliasing with Sony's F3; it actually took them a long time to achieve even barely acceptable filtering on recent models.

    Many excellent DOPs use diffusion on digital sensors to take that digital harshness away. And remember that analog film resolved only about HDTV resolution when viewed in the cinema.

    Yes, everybody called the RED One too soft for a 4K sensor. And, yes, they made huge mistakes when fine-tuning their Dragon OLPF. The latter proves what John writes about the difficulty of designing a proper OLPF, and may be one of the reasons why RED is more expensive. BM made the decision not to have an OLPF; let's face it, it was probably an economic decision too. Maybe it would have been wise to design their own OLPF and offer it as an option.

    Finally, it does get better with higher pixel density, that's true. We got stuff rejected for moiré a few times with the BMCC, but it hasn't happened yet with the UMP 4.6K. Then again, I like vintage glass on our cameras, and that helps. Even at 4.6K, many lenses no longer out-resolve the sensor.

    Stills cameras, like my Sony A7R II, are a different subject altogether. It out-resolves most lenses, and some rainbow on a still is far less distracting. On a motion camera, rainbows move when the camera moves and draw a lot of attention.
