Re: Low-res screenshots

Thanks for the offer, Ronan, but I really want to try the photographing route first; the internet is sadly lacking in this even today. That sample is not mine, but I like to think I can get a result like that with a bit of practice (and a better camera). I do believe that 15-kHz BVMs are not good for gaming in general: over 600 TVL the black lines become too evident and hurt intelligibility, so yep, well-calibrated mid-to-low-end PVMs/mid-to-high-end Trinitron TVs (or good shadow-mask monitors) are where it's at when we're talking low-res gaming. As for curvature, I'm not sure, to be honest. Most likely its effects can't be properly simulated, so why bother.

If I recall correctly, you tried the printed-mag effect with direct-feed screens, but what about downscaled and resampled photos? Do you think that this:


...can be given, with Photoshop, not just better focus but also a texture like the Sailor Moon screen posted above?

My non-answer to the interlace issue -- I wouldn't bother, since interlacing was almost never a desired way of displaying video games (that is, if the devs had had the technology, they would have used 480 progressive). In other words, 480p is, most times, the proper way of viewing 480i. Technically it sure is -- the former rebuilds a picture that was broken up for technological reasons. So I'd just use the progressive version of the picture.

And when it comes down to the meaning of "screenshot", you end up with "still image", so trying to mimic the alternating scans (or, better put, their effect) will result in a "video", not a "screenshot", if that's what you mean by "combine two consecutive fields". Even if you're fine with a video, I don't think you can get something palatable or faithful, yeah.

But if you're accepting a video, then why not an actual photograph of the interlaced picture? You'll lose the flickering (though it's said there were monitors quite good at eliminating it -- I don't think I ever saw one), but the rest of the effects will be there. If taking photos isn't possible, then you should simulate those effects on the direct-feed screenshot (blur and de-focus, essentially), keeping in mind that you should start by adding alternate black lines, much like you did with the "low-res" screenshots when multiplying your 480-line screen to avoid blocky graphics -- but you should also make them almost invisible, whatever it takes. Not easy, indeed.

On a 19'', 1280 x 1024 LCD, I get this:

The latter is ClearType crap, which I can't understand how anybody can be happy with (note this goes only for the smaller fonts; smoothing does improve the bigger ones, and that's what Win XP did). I'm sure it gets less crappy at higher resolutions, but how many people browse the web above 1080 these days? I'm more and more inclined to design and post the new site as .PNG pages, I won't lie.


Re: Low-res screenshots

These people tried the printed-mag effect, with better or worse results:



Re: Low-res screenshots

Error 404, Recap...


Re: Low-res screenshots


http://www.jeuxvideo.com/news/629291/pa … -drive.htm

(Not the thumbnails.)

80 (edited by Ronan 28-04-2017 20:33:59)

Re: Low-res screenshots

Back from the dead again, sorry... I had tried some things with your image, but I did not really get what you meant by the texture:

Now, thanks to your link, I think I can see it. Replicating such noise should be possible. Here is a quick try with noise added to the CMYK channels of the previous image:

Is the difference between the two images a step in the right direction?
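For reference, the CMYK-noise step could be sketched roughly like this -- a pure-numpy version using the naive RGB↔CMYK conversion formulas; the helper names and sigma value are my own choices, not what was actually used:

```python
import numpy as np

def rgb_to_cmyk(rgb):
    """Naive conversion; rgb values in [0, 1], shape (..., 3) -> (..., 4)."""
    k = 1.0 - rgb.max(axis=-1, keepdims=True)
    cmy = (1.0 - rgb - k) / np.clip(1.0 - k, 1e-6, None)
    return np.concatenate([cmy, k], axis=-1)

def cmyk_to_rgb(cmyk):
    """Inverse of the naive conversion above."""
    return (1.0 - cmyk[..., :3]) * (1.0 - cmyk[..., 3:])

def add_cmyk_noise(rgb, sigma=0.05, seed=0):
    """Add independent Gaussian noise to each of the four ink channels,
    then convert back to RGB, clipping to the valid range."""
    rng = np.random.default_rng(seed)
    cmyk = rgb_to_cmyk(rgb)
    noisy = np.clip(cmyk + rng.normal(0.0, sigma, cmyk.shape), 0.0, 1.0)
    return np.clip(cmyk_to_rgb(noisy), 0.0, 1.0)
```

Because the noise is applied per ink channel rather than per RGB channel, it tends to tint locally the way misregistered print does, rather than just looking like sensor grain.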


Re: Low-res screenshots

Hi, Ronan. Thanks for trying. Between the two samples, I don't think there's a noticeable difference regarding the effect I'm looking for, though both are better than the original screen. But the problem precedes your attempts -- the downscaled screen I posted is awful. It's those scaling artifacts I always get in my samples (especially noticeable in the brighter areas, the score display...) that will never let us get a good result, I'm sure.

Check this again:


Since I'll be using scanned art to illustrate every article, my goal is to get screens which look like they've been scanned from printed paper too; otherwise the two forms just don't fit well enough with each other. And besides, it's the only way I know to properly make small screens like these look both real and nice.

In case you need more examples:

https://s4.postimg.org/4gz7kavhp/s-1.png https://s4.postimg.org/prwrokdm5/s-2.png
https://s4.postimg.org/wwekxlkvh/s-3.png https://s4.postimg.org/xbug4cw6l/s-5.png

I thought it could be a matter of focus and some texture, but maybe downscaling photographs is not the way, no matter what.


Re: Low-res screenshots

Just to make a minor but hopefully very useful correction to some of the older discussion here, which I'm not sure has since been refined. It may seem a bit after-the-fact or interrupting, but if people are making emulators, filters, or giving general advice to others about how to set up arcade emulators etc. based on incorrect knowledge, no matter how earnest or well-meaning, then it needs to be fixed.

There seems to be some arguing from a mistaken basis with regards to PC98 resolution and other such things. One part is assuming that everything that appears within the frame of a CRT monitor is a deliberate act by the computer, which is not the case -- a dark area may just as easily mean that the beam simply doesn't sweep there / the machine doesn't generate any imagery at all and its graphics hardware is turned off, as be a purposeful black border (honestly, when they're already pushing the boundaries of what the contemporary technology can achieve, why waste space like that?). A counterpoint to which is assuming that any photo you see of an old monitor is of one that's been set up correctly... also not necessarily true!

The other is that everything always displays at 60hz... which isn't true either :)
And that pixels either are always square, or that on an old computer they never will be... again, not necessarily so.

I've been looking these things up over the last couple days and have managed to refine down the oddly very patchy, spread out and contradictory info to what seems a reasonably consistent final set, and this is essentially what we get:

15khz mode - it is very similar to BUT NOT EXACTLY like the typical TV-type modes of endless other machines. Not sure why, it could be that this is itself a mistake that will be ironed out again later. People just don't seem to be able to operate a calculator correctly when it comes to Japanese computers, IDGI.

Thus, a pixel clock of 14.318mhz (ie 4x NTSC colour burst), a total count of 896 pixels per scanline (so roughly 71% active), and a total of 261 lines per frame (vs the more normal 262 or 263, or 262.5 when interlaced), which is ~77% active.

That gives a horizontal scan rate of 15.98khz, and a frame refresh of 61.23hz ... which is OK for most TVs but probably gives scan converters a headache, and would be a right pain to record on video.

24khz mode - it's sort of standard for 1980s/early-90s Japanese computers but is hardly found anywhere else. The closest equivalent is the Apple 12" Color display for the Mac LC, which runs at 512x384 and 24.48khz. It is NOT quite the same as most Arcade 24khz monitors, though it's close.

Oddball 21.053mhz pixel clock, I've yet to find out what this is derived from or why.
848 pixel times per line (75% active) = 24.83khz horizontal
Total of *440* lines per frame = 91% active and a 56.42hz refresh rate

Note that final line; it's pretty crucial here. All of those additional 40 lines are for blanking or sync: there is no video information in there, "black" or otherwise, generated by the computer. It's dead space, much of which the monitor doesn't even show, as it's busy swinging the vertical deflection all the way back from "fully down" to "fully up". 480 lines is NOT possible in 24khz, let alone something that needs to be deliberately added for some reason.

The only reason you might see letterboxing is that it's essentially showing a 16:10 image (8:5, 4:2.5) on a 16:12 (4:3) screen; if the software was made to use square pixels rather than ones stretched to 120% of normal height, you have to adjust the pic to be a little squashed.

If you broke open the cable and forced a full-white signal onto the image lines whilst keeping the sync etc. running normally, and the monitor didn't auto-blank, what you'd see would be a white rectangle extending just a little further up and down than where it normally reaches, with letterboxing still beyond it, and some shallow lines of even brighter white running across the image showing where the beam was flying back and would normally be turned off.
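For anyone who wants to check the arithmetic, the figures quoted above reproduce in a few lines. The pixel clocks and total counts are the ones from this post; the 640x200 / 640x400 active sizes are the usual PC-98 framebuffer dimensions, assumed here for the "% active" numbers:

```python
# Sanity check of the 15khz / 24khz PC-98 timing figures quoted above.
modes = {
    "15khz": dict(clock_hz=14.318e6, total_px=896, total_lines=261,
                  active_px=640, active_lines=200),
    "24khz": dict(clock_hz=21.053e6, total_px=848, total_lines=440,
                  active_px=640, active_lines=400),
}

for name, m in modes.items():
    h_khz = m["clock_hz"] / m["total_px"] / 1e3          # horizontal scan rate
    v_hz = m["clock_hz"] / (m["total_px"] * m["total_lines"])  # refresh rate
    print(f'{name}: {h_khz:.2f} kHz horizontal, {v_hz:.2f} Hz refresh, '
          f'{100 * m["active_px"] / m["total_px"]:.0f}% / '
          f'{100 * m["active_lines"] / m["total_lines"]:.0f}% active')
```

That prints 15.98 kHz / 61.23 Hz (71% / 77% active) and 24.83 kHz / 56.42 Hz (75% / 91% active), matching the text.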

If you didn't care about square pixels or it was actually intended, you'd stretch the vertical height and the letterboxing would disappear. Or if you happened to have an actual 16:10 monitor designed for 640x400 with square pixels, it would have no borders.

In essence, it's exactly the same as showing a 640x200 or 320x200 image from any other machine on a 4:3, 15khz monitor that allows image adjustment, or indeed a 320x200 (or 640x200, 640x400) picture generated by a regular PC on a 4:3 VGA monitor. In all those cases, there's still nothing else there unless it's a system that generates a flat-colour border around the main picture just to fill out the overscan area, but the actual shape of the active image is at least partly dependent on the user and their preferences, and what they set. A 320x240 ModeX or 640x480 VGA image may fill the screen, but it doesn't mean a 320x200 or 640x400 one which doesn't is only like that because of hardcoded borders. Think of it more like a TV or DVD player that adapts an anamorphic image by sticking black borders of its own at the top/bottom or sides - they don't appear in the original material, it's just the screen has added them in afterwards.

(Well, the DVD player is more like a VGA card showing an EGA hi-rez image, which is the exception to the rule in a lot of ways - in that one case, 25 blank lines are added to the top and bottom to fit 350 into 400. But they're still not really "black", officially they're "blank", and the logical part of the system knows nothing of them. It's the video card alone that's put them there.)

So, burning-in letterboxing to 640x400 PC98 images is just the same as doing similar to bulk out 640x200 to 640x240, or 320x200 to 320x240... or indeed 640x200 to 640x480 instead of just stretching to 640x400. IE, wrong wrong wrong. And in fact, as there are only 440 scanlines generated by the hardware, it's completely at odds with how the monitor works, because now you have 40 lines of something that it can't even display!

On the other front, Arcade 24khz screens do indeed only run up to "384p", like the Apple one. That's because they DO run at 60hz (mostly?), and only scan something like 416 total lines. With a digital display, you could easily manage 400 lines of active output within that, but unless you have very, very good analogue hardware (of a kind that only really came along right at the end of the CRT lifespan, in SXGA or higher rez monitors), there just isn't enough blanking and sync area to allow proper flyback, and you'll get distortion. Arcade medrez monitors are actually just a whisker off 25khz most of the time (24960hz = 416 x 60.0), and actually scan faster than the PC98/etc model...

How the computer monitors get around this is by lowering the refresh rate, in this case to 56.42hz. This may seem rather low and a bit strange, but it's actually very close to the original SVGA standard (56.25hz), and still a fair bit better than PAL and some early PC monitors (MDA etc) which rate 50.0hz, and of course some extended resolution interlaced displays at 37, 43, 47hz... Also if you try to show any of these on a later generation CRT it'll look horrible and flickery, but these older screens had different, long-persistence phosphors that reduced the flicker effect and made it acceptable to look at without your eyes falling out.

That small, less-than-10% reduction in refresh rate buys a correspondingly small increase in the number of available lines at the same horizontal scan rate -- up from 416 to 440 total, and 384 to 400 active. Handy! And it means you can, if your native written language uses Japanese/Chinese pictograms, finally get a decent amount of legible text on screen at once without having to use 15khz interlace mode, which is even more flickery and horrible.

So, hopefully that helps to clear up two separate problems :)

For what it's worth, the 31khz mode of the PC98 is *nearly* like 400-line VGA, but again misses by a tiny amount (it's like 1 scanline different). It's not worth a serious deconstruction here because it'll work just fine with any VGA monitor, but it still has the same caveats --- it doesn't run at 60hz, and it doesn't have 480+ total lines. In fact, it runs *faster*, at just over 70hz refresh, and the total lines are about 450 (you need to allow a few more for the flyback when each individual scanline is shorter!)... And if you don't believe that you can look up the actual VGA specs and check those instead. 480 line mode scans at 60hz with 525 total, 400 line at 70hz with ~450 total. So again there's no "hidden" 40-line-thick letterbox at top or bottom... it's simply areas of the screen the monitor doesn't paint on. VGA-400/70 is mostly used for text mode as it still fits 25 lines of good quality font, but with a little less flicker than the (normally used for graphics) 480/60 one, which means you can work on it for 8 hours a day doing Serious Things and not suffer as much eyestrain, then change mode when you want to do something more arty.
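Those VGA figures do check out arithmetically. The pixel clock and totals below are the standard IBM VGA numbers (not from this thread): both modes share the same ~31.47 kHz horizontal rate, and only the total line count changes the refresh:

```python
# Standard VGA graphics-mode timing: 25.175 MHz clock, 800 pixel times/line.
CLOCK = 25.175e6   # Hz
TOTAL_PX = 800     # pixel times per scanline

for total_lines, label in ((525, "480-line"), (449, "400-line")):
    h_khz = CLOCK / TOTAL_PX / 1e3
    v_hz = CLOCK / (TOTAL_PX * total_lines)
    print(f"{label}: {h_khz:.2f} kHz, {v_hz:.2f} Hz, {total_lines} total lines")
```

That gives 59.94 Hz with 525 total lines and 70.09 Hz with 449 total -- i.e. the "60hz / 525" and "70hz / ~450" figures above, with the exact total being 449 rather than 450.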

Other than that technical difference, where swapping your 24khz PC98 monitor for a 31khz one is essentially an exercise in flicker-reduction (and, ultimately, with the later models, allowing use of 640x480 mode when the graphics card finally supported it) and putting extra strain on the graphics circuitry (as it has to generate images about 30% faster!), there's nothing at all different between those modes as far as the computer, the image-building (rather than displaying) and buffering hardware, or the software is concerned. It's still 640x400 pixels, with 16 colours out of 4096...

83 (edited by tahrey 10-10-2017 21:06:28)

Re: Low-res screenshots

(anyway, I hope no-one gets offended by that or thinks it's a horrible criticism... the info isn't always immediately available or clear, and if this wasn't a rather neat project that I figured was worth keeping on the right track I wouldn't have bothered commenting. Just gotta make sure the right information gets spread around! If it had already been addressed in the other 2017 comments, then mea culpa, I ended up skimming them a bit as I've both got a bit of a headache, and had something else I needed to do after writing the previous entry)

((and whilst I'm on the subject, if anyone knows whether the Atari high-rez monitors on both the ST and TT were based off, or happen to share their scan structure and timing with any other model, rather than being something their engineers came up with by basically rolling dice, that'd be useful ... rather a smaller scale / more limited-interest thing, but as they use somewhat weird setups (partly due to everything being timed off 8.01mhz, when almost everyone else used something related to NTSC) that are supported by basically nothing else, other than reasonably flexible SVGA and forgiving multisync VGA monitors - certainly not scan converters and capture devices! - it could be handy for the future to know if there are other platforms that happen to use similar or identical monitor settings...))


Re: Low-res screenshots

*rolls through other posts on this sub-page (ie since 25th feb this year)*

* ClearType, I find really nice on actual LCDs; it's just a shame that MS implemented it at the screen-compositing / initial text-rendering level rather than in the graphics drivers. Having it appear in screenshots is a really crappy side effect of something that, by rights, should be handled by rendering the text to the hardware in monochrome at 3x the width and letting the GPU chop it up into individual channels for display, so it still comes out as plain antialiased greyscale in screenshots (to say nothing of the chromatic aberration caused with coloured text...). On CRTs and projectors, and on LCDs not properly tuned for it (or turned portrait, etc.), it's hopeless and only regular antialiasing should be used.

It's not a problem of the actual subpixel rendering concept, just a rather cheap and careless implementation of it. Though doing it "properly" would probably require a hell of a lot of work and redesigning certain core OS components at a pretty basic level. Let's just wait it out until 4K becomes the standard for everything, then it will be moot anyway...
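The "render at 3x width, let the hardware split the channels" idea above is almost embarrassingly simple in principle -- a toy sketch, assuming horizontal RGB stripe order (the helper name is made up):

```python
import numpy as np

def triple_width_to_subpixels(glyph: np.ndarray) -> np.ndarray:
    """Map a greyscale glyph rendered at 3x horizontal resolution,
    shape (h, 3*w) with coverage in [0, 1], onto the R, G, B subpixels
    of a (h, w, 3) output image: each consecutive column triple becomes
    one pixel's three channels."""
    h, w3 = glyph.shape
    assert w3 % 3 == 0, "width must be a multiple of 3"
    return glyph.reshape(h, w3 // 3, 3)
```

A screenshot of the *input* to this step would be plain greyscale; the colour fringing only exists on the physical panel, which is exactly the behaviour the post argues for.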

(In the meantime, it's not exactly hard to turn off... problem is, not all software actually pays attention to that setting any more!)

* Recreating the interlace "shimmer" effect need not turn the result into a movie. You only need two frames, with relatively subtle differences between them. So long as you don't need more than 256 colours overall, you could make a 2-frame GIF. (OK, it's imperfect, not least because CompuServe were stupid enough to define its playback rate in terms of 1/100th-sec delays, so you can pick 14.3, 16.7, 20, 25, 33.3, 50 and 100fps, but not 60, 30, 24 or 15 -- but it can come close, and there's still no real alternative thanks to the failure of MNG.)
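The delay maths above, for what it's worth: GIF frame delays are whole hundredths of a second, so only rates of 100/n fps are exactly representable:

```python
# Frame rates exactly representable by GIF's 1/100th-second delay field.
delays_cs = range(1, 8)                         # delay in centiseconds
rates = [round(100 / d, 1) for d in delays_cs]
print(rates)  # [100.0, 50.0, 33.3, 25.0, 20.0, 16.7, 14.3]
```

Hence 60, 30, 24 and 15 fps can only be approximated (e.g. by alternating 1- and 2-centisecond delays to average ~60fps, which some browsers handle badly).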

In fact, it's possible to cut up GIFs such that they have multiple "patches" with their own 256 colour palette all drawn near-simultaneously in the same file, with transparency working between them and priority based on order within the file (ie whatever's drawn "last" is on top). So even if your file ends up actually needing a good 4096 colours between the full-brightness ones of the active scanlines, and the darker fading-out / partly lit interlace ones, that's potentially "just" 16 patches so long as there's not too many all in the one area, with an alternating transparency mask blanking out the lines that aren't to be drawn in that stage (well, OK, you'd only get 255 per palette because of having to reserve one as transparent, but you still get 4080 overall). Two sets of those and you can have your shimmering interlaced scanliney image.

Though it might be far more sensible to use a simpler combination with two PNGs (or even one PNG with the image duplicated side-by-side with different lines dimmed/brightened in each) and a bit of javascript to alternate between them at 60hz, so long as you had enough control over where and how they're displayed. Still not a movie, just a picture being displayed in a certain way. Easier then also to pre-render it to look much more natural and realistic, with the proper bloom of the lit lines and the fadeout of the unlit ones. And indeed with three pictures / frames, it could be made universally compatible - the script showing a composited, static, all-lines version of the pic (equivalent to a photograph taken at 1/30th sec shutter speed) first up, as would a copy of the page with JS turned off (and that's what would download if right-clicked, too)... but there would be the option to click on them to turn the interlace shimmer effect on or off, at which point it would transition to alternating between frames 2 and 3 at 60hz, going back to static frame 1 if clicked a second time (because, yknow, it might give people a headache).

* Monitors with too high a resolution, meaning the scanlines get TOO severe... well, this does mean fiddling around with the guts of a sensitive and indeed somewhat dangerous CRT, but if you have the necessary attitude towards safety and can get hold of the relevant service manual or schematics, wouldn't it be enough to just defocus the beam(s) a little? Or if the monitor makes use of deliberate extra-high-frequency vertical oscillation to make the beam seem "taller" without massive horizontal blurring, turning that effect up higher? There should be adjustments available for both after all, as they're things that would have been calibrated at the factory. Ideally a well made and thoughtfully designed monitor should be able to adjust one/other/both in response to the input resolution and scan rate (as well as the beam power, so lower line counts don't end up producing a very dim image from spreading out too few electrons over too large an area).

Be wary of "getting rid of" interlacing because you think progressive is the "correct" way of viewing video content, by the way... that may be so if you're in control of the video recording or other signal generation / video-memory filling in the first place, so you can be certain you're writing a nice clean progressive image to memory and then out to the screen and catching it in the camera, all at a silky-smooth high framerate (at least 48hz, preferably 60 or more). If, however, you're dealing with EXISTING content, "repairing" it in that way can cause as much harm as good. A careful decision has to be made about whether to do it in the first place, and then, if you go ahead, you have to do it RIGHT. Sympathetic deinterlacing that doesn't visit hard- or impossible-to-undo damage on the underlying image can be as much of an art as a science. Normally this is a lesson that's relevant more to video than still images, but the underlying theory is just the same.

For starters, you need to have the same resolution at the end as what you started with - ie if it's 480i input, you should have 480p output, and genuinely so. Not 240p, not some other number, and each of those progressive lines should have clear data of their own rather than being duplicates created after bobbing, interpolated data similarly produced after half of the original has been thrown away, or otherwise over-smoothed. Then if it's video, you also need to have the same update speed... IE, if you started out with 480i60, then you need to end up with 480p60. Not 480p30, and, again, not 240p60, and if you finish with 240p30 and think that's fine then it might be best to just walk away from the idea of digital imaging forever. The correct output for progressive upscaling of interlaced input is the exact same format as what was input, just with the i turned into a p.

The exception being of course where the original format was itself an upscale in some way, e.g. 240 lines doubled to 480, or 30fps content rendered at 60hz (either interlace with the lines being split across the two fields, or progressive with the second frame of a pair being identical to the first), or maybe even both/all three (?). If you can work out, or already know that this is the case, and the only thing that needs to be done to recover the original content is simple decimation in the spatial and/or temporal dimensions, then go right ahead. Like I say, as much of an art as hard-ruled science.

In cases where you have full motion, full resolution interlaced content that was originally captured with a camera (there's not so much generated fully synthetically that can't just be synthesised as progressive straight off the bat, after all), you also have to choose whether you want to then have the progressive result update in an interlaced manner - ie half the lines change in each output frame, corresponding to the active field - which might look more true to the original but also rather weird and distracting after transfer to some other screen/file format, or if you want to try and recreate what it would have looked like if it was recorded full-rate progressive in the first place. The latter can actually be achieved with some success these days, but it requires particular standalone software, or plugins for "proper" video editing/conversion software, that will use motion detection/estimation and such to rebuild the missing data. At the very least, it should render static parts at full rez and quietly switch to bob-weave interpolating the moving ones (ie using the half-resolution, full-motion data, linedoubled with the top line of each pair occurring on either an odd or even line depending on what field it was), which is essentially what your eye and most older LCD TVs did anyway. The best quality ones will do something akin to the fluid-motion framerate upscaling seen in 100/120hz and higher TVs, but at half the framerate and only having to fill in for half the lines of each frame instead of all of them. Which, for the relatively simple material that would be the subject in this case, with very little or no 3D component to how it shifts, should work almost magically well.
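The baseline bob/line-double step described above could be sketched like so -- a hypothetical numpy helper, with the field naming and neighbour-averaging interpolation being illustrative choices rather than any particular tool's behaviour:

```python
import numpy as np

def bob_field(field: np.ndarray, top_field: bool) -> np.ndarray:
    """Expand one field (h, w) to a progressive frame (2h, w).
    Real lines land on even rows for a top field (odd rows otherwise);
    the missing rows are filled by averaging their vertical neighbours,
    falling back to the nearest real line at the frame edges."""
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=np.float64)
    offset = 0 if top_field else 1
    frame[offset::2] = field           # place the real scanlines
    for y in range(1 - offset, 2 * h, 2):  # interpolate the gaps
        above = frame[y - 1] if y > 0 else frame[y + 1]
        below = frame[y + 1] if y + 1 < 2 * h else frame[y - 1]
        frame[y] = (above + below) / 2
    return frame
```

Run at one output frame per input field, this is the half-resolution, full-motion path; a motion-adaptive deinterlacer would only use it for the moving regions and weave the static ones at full detail.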

In any case, don't just flatten it to 30hz-within-60hz so that both fields appear at the same time, or something nasty like that; it won't look right (motion will be jerky, and moving things will be covered in combing artifacts) and you won't be able to figure out why. The only advantage of doing that is, at least, that the original interlaced appearance can be easily recovered by cutting out the even and odd lines of each flattened frame to recover the original fields... just gotta make sure you then replay them in the right order...!

* Printed magazine effect ... this is very simple to produce, really, as you essentially just feed the screengrab into the same software routines that would make the photoset masks for the printing press in the first place, which are nothing special. However, if what you're trying to convey is "what this would have looked like when played on the original hardware", it won't come out anything close to correct.

You're essentially upscaling it several times using nearest-neighbour, then dithering it to nominally 16 colours (really, more like 8 or 9, unless the black ink is very thin) using a variable-size spot, offset-angle technique, rather than the more familiar ordered-grid or error-diffusion methods. That technique was originally only used because there was no other way of getting acceptable variable-tone colour (or indeed monochrome) images out of an industrial printing press, and it nowadays basically looks like crap unless you're deliberately doing it for a pop-arty Instagram-kid stylistic effect.

I mean, take something that, unless it's an olllllllld arcade game, PC88 or very early PC98 screengrab (or maybe TTL CGA/low-rez EGA/default-colour VGA), has an effective palette of at least (8, 16 or more from) 64 and probably more like 512, 4096, 32768+++ colours, with some reasonably fine pixel-by-pixel details, and then mangle it down to just 8/9/16 fixed colours with very blocky dithering -- why *wouldn't* it look terrible? Especially if viewed on a screen where the minimum dot size is decidedly larger than that of the printed version (presses may rate at only 120 lines per inch or so, but the dots printed along those lines can be much smaller than 1/120th of an inch for lighter shades, so it's not comparable to a laptop screen whose raw resolution is about 120ppi -- unless the only spot sizes used are nil (none of that ink applied), max (solid blocks of ink), and 50% (equal spacing of dots and whitespace)).
Ironically the only way it could really look decent is if it was displayed at as high a resolution as technically achievable on an otherwise rather low quality CRT with chunky dot pitch and blurry electron gun focus.
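The screening step described above -- a variable-size spot at an offset angle per ink -- can be sketched roughly like so. This is a hypothetical numpy clustered-dot screen; the cosine spot function and names are my own choices, not what any real RIP uses:

```python
import numpy as np

def screen_channel(ch: np.ndarray, angle_deg: float, period: float = 6.0) -> np.ndarray:
    """Halftone one ink channel (values in [0, 1], 1 = full ink coverage)
    by thresholding against a rotated clustered-dot spot function: darker
    areas clear the threshold over a larger area, so the dots grow."""
    yy, xx = np.indices(ch.shape, dtype=np.float64)
    a = np.deg2rad(angle_deg)
    u = xx * np.cos(a) + yy * np.sin(a)    # rotate the screen grid
    v = -xx * np.sin(a) + yy * np.cos(a)
    spot = 0.5 + 0.25 * (np.cos(2 * np.pi * u / period)
                         + np.cos(2 * np.pi * v / period))
    spot = np.clip(spot, 1e-6, 1 - 1e-6)   # keep full/zero ink solid
    return (ch > spot).astype(np.float64)

# The classic offset angles would be roughly C=15, M=75, Y=0, K=45 degrees,
# applied one channel at a time after an RGB-to-CMYK separation.
```

Each output pixel is binary (ink or no ink), which is exactly why feeding a 4096-colour screengrab through it looks so brutal at screen resolution.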

If the idea is that it better emulates the appearance of a colour screen that uses phosphor triads rather than trinitron stripes (the latter being something that most "CRT" filters don't bother trying to recreate anyway), then, well, it's an innovative way of going about it, but it's still inaccurate and approaching the idea from the wrong direction. Much better to come up with a colour-dithering method that actually works in the described way instead.

(The problem with all methods of attempting to recreate CRT phosphor texture, and indeed the scanline element of it, is that you either need a very bright display on which to show your final result, or accept that it will always come out looking rather dim and a bit disappointing... because an integral part of it all will be areas of black, or missing/reduced contribution to overall output from one or more of the colour channels, different from and additional to the same suboptimal-brightness-causing elements on the target display, and arising from essentially the same source. A bit like putting two differently manufactured flyscreens in front of each other, in front of a window to a sunny summer scene... no matter what you do, there's going to be moire and interference and less light showing through that combination than with either of them individually, because (unlike with two identical screens from the same batch in the same factory) there's no way that you can really get them to ever line up with each other properly. And if you could, then there'd be no need for the filter; the same if you somehow had an LCD monitor with the exact same subpixel striping and resolution as an equivalent Trinitron... a bit of bloom and a faint hint of horizontal scanlines, plus the thin aperture-grille support wires, and that'd be all you'd need for a faked-up image with no suspension of disbelief required beyond that of the logical picture filling the physical frame *exactly*, which was never originally the case. It's maybe better to just accept that some things aren't meant to be, and that the games were almost certainly NOT intended for play on any particular flavour of monitor, so long as it was in colour (mostly), and both large and sharp enough for the player to be able to see and understand what was going on -- so a mild suggestion of reality is more than enough, because it could have varied a lot anyway.)

All that said, I can't see the mentioned effect in the linked screenshots. It's a very colourful-looking Megadrive game, and as a result some of the backgrounds have obviously ended up having to be dithered, plus the characters look like they've stepped right out of some dystopian early-90s bande dessinée, but that's about as much "printed magazine effect" as I can make out. It's otherwise very clean and clear, with frankly unrealistically sharp-edged pixels, even. So, uh.... ?!

(the inlined smaller images, I can't see it either... one has a little blurring at the edges, as if taken from a monitor with poor convergence or using a cheap camera not designed for such close-up work, and the other looks to have a tiny amount of mostly random, slightly periodic visual noise in a limited number of places, as if the video cable was in bad condition, but otherwise nothing I'd call "magazine effect"?)

Whether it's actually as much of a problem as is being imagined, though? Perfection is a nice thing to strive for, but in the case of e.g. the Sailor Moon picture, I can't see anything hugely wrong with it as a deliberately retro-styled screengrab. Like, objectively, there are multiple things that need fixing (the shutter-to-scan sync isn't quite right, the exposure is too long / aperture too wide, and the final thing could have done with some moire filtering before being digitally resized), but not in this particular context. And your other examples, assuming they're things you've made for this project rather than just random from-magazine scans (they're damn good quality if so, with just the imperfect contrast and some colour "vibration" in areas of flat medium-strength background giving the game away), already look spot-on.

They were never that great in the first place, in most magazines, and I did sometimes wonder at the time exactly how they obtained them anyhow, as the quality could be very variable -- from razor-sharp pixels and lush, perfect colours to barely recognisable smudges, and all points in between. In comparison, any old screengrab out of an emulator or real-hardware framebuffer, with just the mildest of filtering to smooth off overly sharp edges that wouldn't have looked like that onscreen (and so can cause distracting visual noise when downscaled, rotated, etc.), will look pretty glorious, and any discomfort you might feel is because they make the other pics look bad... don't make the mistake of transferring that feeling into "it's because these screengrabs are themselves bad".

Presumably some of the oldskool prints were raw framebuffer grabs, either using hacker equipment or files provided by the manufacturer; others were taken with analogue video capture devices, maybe via (S)VHS or pro-grade Beta tape recordings of playthroughs so the most useful moment could be picked to illustrate the review (if you're only printing a few centimetres wide, even VHS looks pretty sharp); or just telecine-type camera rigs with sync-detector circuits that precisely timed the shutter to the monitor scan, carefully locked off and exposure-controlled to get a good shot (or rubbish cameras on plain tripods with zero sync control, not locked off or exposed very well at all), then physically cropped down and re-scanned (or the negatives used as-is in the photoset machines?) after development. Wouldn't be surprised if some just used Polaroids on a shoot-and-pray basis, then cut out the results with scissors and physically glued them to the master copy of each page... In short, don't overly idolise them; they probably put in less effort, and had access to far inferior hardware and image-retouching facilities compared with what you have :)

(sidenote: this sort of thing, however, is one reason the Mac line, especially the Mac II, had its first massive flush of success amongst creatives - it was essentially the only brand of computer on which you could carry out the necessary scanning, in-monitor full-colour image editing, and then DTP page compositing with digitised pics, at any price, for a good long time... Apple could get away with charging insane amounts for them because no-one else had anything that came close in the graphical horsepower, memory or processor departments... and I say that without being any real fan of theirs; it's just something I've learned of their mid-to-late-80s hardware... it took until the 90s for PCs to start catching up, and Atari/Amiga and the not-even-quasi-PC-compatible Japanese models never really did, ever. That doesn't say anything for the operating system or other aspects of the machinery, but essentially: if you wanted to do business or industrial stuff you got a PC; video or broadcast TV material and gaming required an Amiga (or one of the Japanese machines if you were hardcore about your home arcade replica); making some low-end monochrome publications on the cheap or doing stuff with MIDI music called for an ST; and if you wanted to be a full-colour-glossy pro publisher with lots of photos splashed around, it had to be a Mac... and later, for doing things in a music studio with the computer operating also as a sample manipulation and output engine beyond what could be brought about with an ST+Amiga, an STe, or an ST with a fancy sampler cartridge, prior to the birth of the Falcon and the arrival of actually decent 16-bit PC soundcards, it was also the Mac)

..........right I think I'd better stop there before I go waaaaaaaaaaaaay too far, instead of just a bit too far.

85 (edited by tahrey 10-10-2017 22:58:39)

Re: Low-res screenshots

Oh, and from what I've seen elsewhere, the use of fancy shader effects for retro emulator scanline filters etc. is a bit of a controversial, hot-button topic for a lot of people... some swear by them, some swear AT them and their users. Possibly because for every actually decent example made by someone attempting to produce the most authentic and best-looking universal shader, there are several dozen churned out by idiots who have got as far as figuring out how to apply a 1.5-pixel-wide gaussian filter or a 2xSaI-type low-colour interpolation algorithm, then slap a simple "skip every other line" scanline effect over the top... yet think they've made something unique and amazing, so aggressively promote their nasty image-wrecker to all and sundry until it gets picked up by masses who know no better... with the GOOD shaders ending up withering on the vine because they can't get any recognition.
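For the curious, the naive "skip every other line" effect complained about above really is this trivial -- here's a rough sketch (my own illustration, not any particular shader's code), operating on a frame represented as rows of (R, G, B) tuples rather than on the GPU:

```python
def naive_scanlines(frame, darken=0.0):
    """Darken every odd row; darken=0.0 blacks it out entirely,
    which is exactly the crude look being criticised."""
    out = []
    for y, row in enumerate(frame):
        if y % 2 == 1:
            # Scale each channel down; a decent shader would instead blend
            # with neighbouring lines and model phosphor bloom.
            row = [(int(r * darken), int(g * darken), int(b * darken))
                   for (r, g, b) in row]
        out.append(list(row))
    return out
```

A better filter would attenuate rather than zero the odd lines, and vary the attenuation with brightness, which is roughly what separates the good shaders from the image-wreckers.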

Anyway, whatever it was you used for that Rocket Knight picture seems pretty good, from a casual glance, so maybe keep using that one?


Re: Low-res screenshots

Wham! You'll have to give me a few days to read it all and properly address every subject -- I'm a bit too busy this week. It seems a good moment to split the discussion into several threads, anyway. I'm liking what I'm reading, and I'm aware there are many things here which need further elaboration and even correction, so don't worry.


Re: Low-res screenshots

OK. The PC98 screens thing for now, then. You're referring to this post from 2010:

Recap wrote:

Another thing is PC88 games (and several other computer games I'm sure) that run at different weird resolutions, and with different aspect ratios it seems:


Is that Wer Dragon?

See, that's what I meant. All the CRT games have a 4 : 3 aspect ratio. It just happens that many of them also have black borders, but those are part of the screenshot itself too. You'll find all the interweb emulator screens from PC88 and PC98 games at 640 x 400, but the actual hardware always left two black borders for a 640 x 480 full-screen resolution. So when you find this, for instance (PC98 Digan no Maseki):


...you actually should be editing it in order to get this:


Your proof, just in case:


http://postback.geedorah.com/foros/view … 9430#p9430

To which you, to sum it up, object with this:

480 lines is NOT possible in 24khz, let alone something that needs to be deliberately added for some reason. The only reason you might see letterboxing is that it's essentially showing a 16:10 image (8:5, 4:2.5) on a 16:12 (4:3) screen, and if the software was made to use square pixels rather than ones stretched to 120% of normal height, you have to adjust the pic to be a little squashed.

I'm afraid you misunderstood my message there, and it's pretty much my fault. There are too many subjects mixed up in this thread now, and my poor wording there (2010!) is not helping to clarify what was being discussed at that moment. See, in actuality, this thread is about web page design -- about how to show game screenshots in game reviews. And for that very matter, I advocate a whole-screen-real-estate approach instead of a frame-buffer approach. The user, in the end, doesn't care about the cause of the letterboxing -- he just gets black borders or he doesn't, and that affects his experience (full-screen gaming has always been a concern for a reason, as you'll agree).

So when I used the actual hardware phrase in that post, I was just trying to remind everybody that they were forgetting the PC98 monitor in the equation (which *always* had a 4 : 3 ratio, 'cause nope, *nobody* happened to have an actual 16 : 10 monitor designed for 640 x 400 with square pixels). That is to say, this picture:


...is not there to count pixels, but to explain the game/experience.

You seem pretty versed in PC98 matters (much more than myself, in fact, which is not that hard), so I'm sure this is not necessary, but since that title screen may not be clear enough for everybody, and given that this thread seems to have more relevance than even I am aware of, let's do a bit of hotlinking:







(Thanks, Tokugawa)

Give it a try -- you just won't find a single PC98 game (not one that matters, at least; and I'm not counting 9821 games, since that's another platform altogether) which looks more natural (or just plain correct) with its graphics using a 4 : 3 ratio (though in more than a few cases, I'm sure, it won't matter much, and so this distortion will come off as more convenient). And yeah, even on the original hardware, you have to adjust the pic to be a little squashed. Notice I indeed leave aside the aforementioned black borders when talking about graphics.
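For anyone wanting to put a number on that squash: the pixel aspect ratio falls straight out of dividing the display's shape by the framebuffer's shape. This is just the general arithmetic, stated for the contested case where a 640 x 400 frame fills the whole 4 : 3 tube with no borders:

```python
from fractions import Fraction

def pixel_aspect(res_w, res_h, display_w=4, display_h=3):
    """Width:height of one pixel when a res_w x res_h framebuffer
    fills a display_w:display_h screen exactly."""
    return Fraction(display_w, display_h) / Fraction(res_w, res_h)

# 640 x 400 filling a 4:3 monitor -> each pixel is 5/6 as wide as it is
# tall, i.e. drawn 1.2x taller than square; square-pixel art comes out
# vertically stretched unless you compensate.
par_pc98 = pixel_aspect(640, 400)   # Fraction(5, 6)

# 640 x 480 on the same monitor -> exactly square pixels, which is why
# leaving the 40-line black borders top and bottom keeps art at 1:1.
par_borders = pixel_aspect(640, 480)  # Fraction(1, 1)
```

Of course, on a real CRT the user could turn the V-size knob either way, so this only pins down the two idealised cases being argued about.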

So those pics above from random PC98 games are not actually screenshots, but merely full-frame, 1 : 1 graphics. Semantics; what can you do.

Of course, we're talking analog display technology, so ultimately it's up to the user's preferences and settings; but even if I myself am not too anal about aspect ratios when gaming (not so lax as to let unmissable circles turn into flagrant ovals, though!), I think you can't skip the as intended part when doing a game review.

This leads us to the part of your dissertation where I disagree:

So, burning-in letterboxing to 640x400 PC98 images is just the same as doing similar to bulk out 640x200 to 640x240 or 320x200 to 320x240... or indeed 640x200 to 640x480 instead of just stretching to 640x400.

Editing the full frame is editing the full frame no matter how, I guess, but I'm not sure this approach is useful beyond hardware analysis and pixel counting. When you burn in letterboxing to 640 x 400 for a 640 x 480 screenshot, or to, say, EGA's 320 x 200 for a 320 x 240 one, you're doing it because you know the original display used (or should use) square pixels, and that kind of mimics what you see on screen. When you do it with 640 x 200 for a 640 x 480 screenshot -- to illustrate:


...you're doing... nothing which makes sense, no matter the approach.

Stretching (line-doubling, whatever) to 640 x 400 is not very different, if you ask me. On the one hand, you distort the pixels so that the picture's integrity is lost, and on the other, you still don't represent the original letterboxing. Two wrongs don't make a right. At least add a scanline effect.
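Both edits being compared here are mechanically trivial, which is worth seeing side by side -- this is just my own sketch of the two operations (any image editor or library does the same), on a frame represented as a list of pixel rows:

```python
BLACK = (0, 0, 0)

def letterbox(frame, target_h=480):
    """Burn in the letterboxing: centre the frame vertically in
    target_h rows, padding top and bottom with black bars.
    E.g. 640 x 400 -> 640 x 480 with two 40-line borders."""
    width = len(frame[0])
    top = (target_h - len(frame)) // 2
    bottom = target_h - len(frame) - top
    return ([[BLACK] * width for _ in range(top)]
            + [list(row) for row in frame]
            + [[BLACK] * width for _ in range(bottom)])

def line_double(frame):
    """Stretch by repeating every row once: 640 x 200 -> 640 x 400.
    This is the edit that distorts the pixels without restoring
    the original letterboxing."""
    return [list(row) for row in frame for _ in range(2)]
```

The first keeps every source pixel intact and only adds the borders; the second invents duplicate lines, which is the "two wrongs" case above.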

Anyway, the hardware point of view is genuinely interesting, and I want to thank you for the heads-up. The PC-9801 series (much like most Japanese PCs) is not too well documented after all, and the video thingie likely suffers the most from that. I didn't know for certain that it had a 15-kHz mode, for instance, and always thought it wouldn't make much sense, since the main reason behind the computer's conception was offering better hardware for kanji usage than the PC-88 series (and also, you never find 15-kHz games on it, despite all the direct ports from the 88 Mk-II SR without graphic alterations). But that makes for another thread, I'd say. In fact, it's perfect material for a new thread on Eiusdemmodi [ > ], given its relevance in the emulation field. One of the things pending there is documenting the video aspects of every relevant gaming system so that we can later pick the best current emulators for CRT usage and properly configure them, and the PC-9801 series is as good a place as any to begin the task. Hopefully at this point you've heard of CRT Emudriver and Calamity's work, but if you haven't, I'm sure you'll enjoy what you'll find there. And you'll find that there's no need to explain stuff like horizontal scan rates, blanking lines and other CRT technicalities, which were simply prerequisites for what we're attempting there. So if you're willing to participate, I can create an account for you -- automatic registration is disabled.

If you aren't, please let me know whether you'd mind me asking you some questions about the subject in another thread here.

As for the Atari monitor thing, have you tried asking here?