*scrolls through the other posts on this sub-page (ie since 25th Feb this year)*
hmmm...
* ClearType I find really nice on actual LCDs; it's just a shame that MS implemented it at the screen-compositing / initial text-rendering level rather than down in the graphics drivers. Having it show up in screenshots is a really crappy side effect of something that, by rights, should be handled by rendering the text to the hardware in monochrome at 3x the width and letting the GPU chop that up into the individual colour channels for display, so screenshots still come out as plain antialiased greyscale (to say nothing of the chromatic fringing it causes on coloured text...). On CRTs and projectors, and on LCDs that aren't properly tuned in to it (or have been turned portrait, etc), it's hopeless and only regular antialiasing should be used.
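(For the sake of argument, a minimal sketch of the split I mean, in Python/numpy - the function names, and the idea of being handed a 3x-wide greyscale coverage map by the text rasteriser, are my own assumptions for illustration, not how GDI actually works:)

```python
import numpy as np

def pack_subpixels(coverage_3x: np.ndarray) -> np.ndarray:
    """Pack a (H, 3*W) greyscale coverage map into (H, W, 3) RGB,
    one sample per subpixel, assuming plain RGB-ordered stripes."""
    h, w3 = coverage_3x.shape
    return coverage_3x.reshape(h, w3 // 3, 3)   # consecutive samples -> R, G, B

def pack_greyscale(coverage_3x: np.ndarray) -> np.ndarray:
    """What a screenshot (or a CRT/projector/rotated panel) should get instead:
    the same map averaged back down to one value per whole pixel."""
    h, w3 = coverage_3x.shape
    return coverage_3x.reshape(h, w3 // 3, 3).mean(axis=2)
```

Both outputs come from the same monochrome rendering; which one actually reaches the user would then be the display hardware/driver's problem, which is the whole point. (A real implementation would also filter across neighbouring subpixels to tame the colour fringing, but that doesn't change the principle.)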
It's not a problem of the actual subpixel rendering concept, just a rather cheap and careless implementation of it. Though doing it "properly" would probably require a hell of a lot of work and redesigning certain core OS components at a pretty basic level. Let's just wait it out until 4K becomes the standard for everything, then it will be moot anyway...
(In the meantime, it's not exactly hard to turn off... problem is, not all software actually pays attention to that setting any more!)
* Recreating the interlace "shimmer" effect needn't turn the result into a movie. You only need two frames, with relatively subtle differences between them, and so long as you don't need more than 256 colours overall you could make a 2-frame GIF. (OK, it's imperfect, not least because CompuServe were stupid enough to define playback rate in terms of 1/100th-sec delays, so you can pick 100, 50, 33.3, 25, 20, 16.7 or 14.3fps but not 60, 30, 24 or 15; still, it can come close, and there's no real alternative thanks to the failure of MNG.)
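(If anyone fancies trying it, a two-frame looping GIF is a couple of lines with something like Pillow - the filenames here are just placeholders for the two pre-rendered interlace phases:)

```python
from PIL import Image

# Placeholder filenames for the two pre-rendered interlace phases.
a = Image.open("phase_a.png").convert("P", palette=Image.ADAPTIVE, colors=256)
b = Image.open("phase_b.png").convert("P", palette=Image.ADAPTIVE, colors=256)

# duration is given in milliseconds, but GIF stores it in 1/100ths of a second,
# so a true 60fps (16.7ms) isn't possible - 20ms (50fps) is the nearest honest
# setting. loop=0 means loop forever.
a.save("shimmer.gif", save_all=True, append_images=[b], duration=20, loop=0)
```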
In fact, it's possible to cut a GIF up so that it contains multiple "patches", each with its own 256-colour palette, all drawn near-simultaneously within the same file, with transparency working between them and priority based on order within the file (ie whatever's drawn "last" ends up on top). So even if your image genuinely needs a good 4096 colours between the full-brightness active scanlines and the darker, fading-out / partly-lit interlace ones, that's potentially "just" 16 patches, so long as there aren't too many colours crammed into any one area, each patch carrying an alternating transparency mask that blanks out the lines which aren't to be drawn at that stage (well, OK, you'd only get 255 per palette because one index has to be reserved as transparent, so 4080 overall). Two sets of those and you have your shimmering, interlaced, scanliney image.
Though it might be far more sensible to use a simpler combination: two PNGs (or even one PNG with the image duplicated side by side, with different lines dimmed/brightened in each half) and a bit of JavaScript to alternate between them at 60Hz, so long as you had enough control over where and how they're displayed. Still not a movie, just a picture being displayed in a certain way. It's also easier then to pre-render it to look much more natural and realistic, with proper bloom on the lit lines and fadeout on the unlit ones. And with three pictures/frames it could be made universally compatible: the script would first show a composited, static, all-lines version of the pic (equivalent to a photograph taken at a 1/30th-sec shutter speed), as would a copy of the page with JS turned off (and that's what would download if right-clicked, too)... but clicking on it would toggle the interlace shimmer effect, switching to alternating between frames 2 and 3 at 60Hz and back to static frame 1 on a second click (because, y'know, it might give people a headache).
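(Pre-rendering the three versions from one clean grab is the easy part; a very crude sketch, with no bloom, just alternate lines dimmed, and with all the filenames invented for the example:)

```python
import numpy as np
from PIL import Image

src = np.asarray(Image.open("clean_grab.png").convert("RGB")).astype(np.float32)

def phase(img, lit_parity, dim=0.25):
    """Keep the lines of one parity at full brightness, dim the other set."""
    out = img.copy()
    out[(1 - lit_parity)::2] *= dim        # the 'unlit' field, fading out
    return out

even = phase(src, 0)                       # frame 2: even lines lit
odd = phase(src, 1)                        # frame 3: odd lines lit
static = (even + odd) / 2.0                # frame 1: the '1/30th-sec photograph'

for name, img in (("static", static), ("even", even), ("odd", odd)):
    Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save(name + ".png")
```

The page-side flipping is then just a handful of lines of JS swapping between even.png and odd.png on a timer or requestAnimationFrame, with a click handler dropping back to static.png.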
* Monitors with too high a resolution, meaning the scanlines get TOO severe... well, this does mean fiddling around with the guts of a sensitive and indeed somewhat dangerous CRT, but if you have the necessary attitude towards safety and can get hold of the relevant service manual or schematics, wouldn't it be enough to just defocus the beam(s) a little? Or if the monitor makes use of deliberate extra-high-frequency vertical oscillation to make the beam seem "taller" without massive horizontal blurring, turning that effect up higher? There should be adjustments available for both after all, as they're things that would have been calibrated at the factory. Ideally a well made and thoughtfully designed monitor should be able to adjust one/other/both in response to the input resolution and scan rate (as well as the beam power, so lower line counts don't end up producing a very dim image from spreading out too few electrons over too large an area).
Be wary of "getting rid of" interlacing because you think progressive is the "correct" way of viewing video content, by the way... that's as maybe if you're in control of the video recording or other signal generation / video memory filling in the first place, so you can be certain that you're writing a nice clean progressive image to the memory and then out to the screen and catching it in the camera, all at a silky smooth high framerate (at least 48hz, preferably 60 or more). If however you're dealing with EXISTING content, "repairing" it in that way can cause as much harm as good. A careful decision has to be made about whether to do that in the first place, and then if you go ahead with it, you have to do it RIGHT. Sympathetic deinterlacing that doesn't visit hard- or impossible-to-undo damage on the underlying image can be as much of an art as a science. Normally this is a lesson that's relevant more to video than still images, but the underlying theory is just the same.
For starters, you need to have the same resolution at the end as what you started with - ie if it's 480i input, you should have 480p output, and genuinely so. Not 240p, not some other number, and each of those progressive lines should have clear data of their own rather than being duplicates created after bobbing, interpolated data similarly produced after half of the original has been thrown away, or otherwise over-smoothed. Then if it's video, you also need to have the same update speed... IE, if you started out with 480i60, then you need to end up with 480p60. Not 480p30, and, again, not 240p60, and if you finish with 240p30 and think that's fine then it might be best to just walk away from the idea of digital imaging forever. The correct output for progressive upscaling of interlaced input is the exact same format as what was input, just with the i turned into a p.
The exception being of course where the original format was itself an upscale in some way, e.g. 240 lines doubled to 480, or 30fps content rendered at 60Hz (either interlaced with the lines being split across the two fields, or progressive with the second frame of a pair being identical to the first), or maybe even both/all three (?). If you can work out, or already know, that this is the case, and the only thing that needs to be done to recover the original content is simple decimation in the spatial and/or temporal dimensions, then go right ahead. Like I say, as much of an art as hard-ruled science.
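(A quick way of checking for the line-doubled case before hacking half the picture away - a rough heuristic, with the threshold plucked out of the air:)

```python
import numpy as np

def looks_line_doubled(frame: np.ndarray, tol: float = 2.0) -> bool:
    """True if each odd line is (near enough) a copy of the even line above it,
    i.e. a nominal 480-line image that's really 240 lines doubled."""
    even = frame[0::2].astype(np.float32)
    odd = frame[1::2].astype(np.float32)
    n = min(len(even), len(odd))
    return float(np.abs(even[:n] - odd[:n]).mean()) < tol

def decimate_lines(frame: np.ndarray) -> np.ndarray:
    """Throw the duplicates away and recover the original 240p picture."""
    return frame[0::2]
```

The same idea works in the temporal direction: compare frame N with frame N+1, and if every pair is a dead match, drop every second frame.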
In cases where you have full-motion, full-resolution interlaced content that was originally captured with a camera (there's not much fully synthetic material that couldn't just be synthesised as progressive straight off the bat, after all), you also have to choose whether you want the progressive result to update in an interlaced manner - ie half the lines change in each output frame, corresponding to the active field - which might look more true to the original but also rather weird and distracting after transfer to some other screen/file format, or whether you want to try and recreate what it would have looked like if it had been recorded full-rate progressive in the first place. The latter can actually be achieved with some success these days, but it requires particular standalone software, or plugins for "proper" video editing/conversion software, that uses motion detection/estimation and the like to rebuild the missing data. At the very least, it should render static parts at full rez and quietly switch to bob-weave interpolation for the moving ones (ie using the half-resolution, full-motion data, line-doubled with the top line of each pair landing on an odd or even line depending on which field it came from), which is essentially what your eye and most older LCD TVs did anyway. The best ones do something akin to the fluid-motion framerate upscaling seen in 100/120Hz-and-higher TVs, but at half the frame rate and only having to fill in half the lines of each frame instead of all of them. Which, for the relatively simple material that would be the subject here, with little or no 3D component to how it moves, should work almost magically well.
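(To make the weave-the-static / bob-the-moving idea concrete, here's a toy per-pixel version in numpy - greyscale only, arbitrary threshold, sloppy at the top/bottom edge, and with none of the proper motion compensation a real deinterlacer would use:)

```python
import numpy as np

def deinterlace_field(prev_frame: np.ndarray, field: np.ndarray,
                      top_field: bool, thresh: float = 12.0) -> np.ndarray:
    """One progressive output frame per incoming field (so 480i60 -> 480p60).
    prev_frame: last full output frame, shape (H, W)
    field:      the newly arrived field, shape (H/2, W)
    Lines from the new field are always kept; the other lines are either
    'woven' (kept from prev_frame, if nothing moved) or 'bobbed'
    (interpolated from the new field's neighbouring lines, if something did)."""
    prev = prev_frame.astype(np.float32)
    new = field.astype(np.float32)
    start = 0 if top_field else 1

    out = prev.copy()
    out[start::2] = new                            # real data, always use it

    # How much did this field's lines change since the previous frame?
    motion = np.abs(new - prev[start::2])

    # Neighbouring new-field line on the other side of each 'stale' line.
    shift = -1 if top_field else 1
    neighbour = np.roll(new, shift, axis=0)
    bob = 0.5 * (new + neighbour)                  # simple linear interpolation

    moving = np.maximum(motion, np.roll(motion, shift, axis=0)) > thresh
    out[1 - start::2] = np.where(moving, bob, prev[1 - start::2])
    return out
```

Run that once per field and you get a genuinely full-rate, full-height progressive stream; the fancier commercial approaches just replace the crude bob with motion-compensated data pulled from the surrounding fields.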
In any case, don't just flatten it to 30Hz-within-60Hz so that both fields appear at the same time, or anything nasty like that; it won't look right (motion will be jerky, and moving objects will be covered in combing / "mouse teeth" artifacts) and you won't be able to figure out why. The only advantage of doing that is that the original interlaced appearance can be easily recovered by cutting each flattened frame back into its even and odd lines to get the original fields... just gotta make sure you then replay them in the right order...!
* Printed magazine effect... this is very simple to produce, really, as you essentially just feed the screengrab into the same software routines that would have made the photoset masks for the printing press in the first place, and those are nothing special. However, if what you're trying to convey is "what this would have looked like when played on the original hardware", it won't come out anything close to correct. You're essentially upscaling the image several times with nearest-neighbour, then dithering it down to nominally 16 colours (really more like 8 or 9, unless the black ink is very thin) using a variable-size-spot, offset-screen-angle technique rather than the more familiar ordered-grid or error-diffusion methods. That technique was only ever used because there was no other way of getting acceptable variable-tone colour (or indeed monochrome) images out of an industrial printing press, and nowadays it basically looks like crap unless you're deliberately after a pop-arty, Instagram-kid stylistic effect. I mean: take something that, unless it's an olllllllld arcade game, PC88 or very early PC98 screengrab (or maybe TTL CGA / low-rez EGA / default-colour VGA), has an effective palette of at least (8, 16 or more from) 64 and probably more like 512, 4096 or 32768+++ colours, plus some reasonably fine pixel-by-pixel detail, and mangle it down to just 8/9/16 fixed colours with very blocky dithering - why *wouldn't* it look terrible? Especially when viewed on a screen whose minimum dot size is decidedly larger than that of the printed version: the print may rate at only 120 lines per inch or so, but the dots along those lines can be much smaller than 1/120th of an inch for the lighter shades, so it's not comparable to a laptop screen with a raw resolution of about 120ppi unless the only dot sizes used are nil (no ink applied), max (solid blocks of ink) and 50% (equal spacing of dots and whitespace). Ironically, the only way it could really look decent is displayed at as high a resolution as technically achievable on an otherwise rather low-quality CRT with a chunky dot pitch and blurry electron-gun focus.
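(For completeness, the screening itself really is only a few lines per ink channel: rotate the sampling grid to the screen angle, grow a round dot in each cell according to how much ink is wanted, overprint the channels. A crude single-ink, greyscale-only sketch - pitch and angle picked arbitrarily, filenames invented:)

```python
import numpy as np
from PIL import Image

def halftone(ink: np.ndarray, pitch: float = 6.0, angle_deg: float = 45.0) -> np.ndarray:
    """Clustered-dot screen for one ink channel (0 = no ink, 255 = solid ink).
    More ink wanted -> bigger round dots, on a grid rotated to angle_deg."""
    h, w = ink.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    a = np.deg2rad(angle_deg)
    u = xx * np.cos(a) + yy * np.sin(a)            # rotate the grid to the screen angle
    v = -xx * np.sin(a) + yy * np.cos(a)
    du = (u / pitch) % 1.0 - 0.5                   # position within the screen cell
    dv = (v / pitch) % 1.0 - 0.5
    r = np.sqrt(du * du + dv * dv) / 0.7071        # distance from cell centre, ~0..1
    wanted = ink.astype(np.float32) / 255.0        # fraction of the cell to cover
    return np.where(r * r < wanted, 0, 255).astype(np.uint8)   # 0 = printed dot

grey = np.asarray(Image.open("screengrab.png").convert("L"))
Image.fromarray(halftone(255 - grey)).save("screened.png")     # screen the ink, not the light
```

A real magazine separation does that once per CMYK channel at the traditional 15/75/0/45-degree screen angles and then composites the inks, which is precisely how you end up with the 8-or-9-colour mush described above.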
If the idea is that it better emulates the appearance of a colour screen that uses phosphor triads rather than trinitron stripes (the latter being something that most "CRT" filters don't bother trying to recreate anyway), then, well, it's an innovative way of going about it, but it's still inaccurate and approaching the idea from the wrong direction. Much better to come up with a colour-dithering method that actually works in the described way instead.
(The problem with all methods of trying to recreate CRT phosphor texture, and indeed the scanline element of it, is that you either need a very bright display on which to show the final result, or you accept that it will always come out looking rather dim and a bit disappointing... because an integral part of the effect is areas of black, or missing/reduced contribution from one or more of the colour channels, which are different from and additional to the same brightness-sapping elements of the target display itself, despite arising from essentially the same source. A bit like putting two differently manufactured flyscreens in front of each other, in front of a window onto a sunny summer scene... no matter what you do, there's going to be moire and interference and less light showing through the combination than through either of them individually, because (unlike with two identical screens from the same batch in the same factory) there's no way you can really get them to line up with each other properly. And if you could, there'd be no need for the filter; same if you somehow had an LCD monitor with the exact same subpixel striping and resolution as an equivalent Trinitron... a bit of bloom, a faint hint of horizontal scanlines, plus the thin aperture-grille support wires, and that'd be all you'd need for a faked-up image, with no suspension of disbelief required beyond that of the logical picture filling the physical frame *exactly*, which was never originally the case. Maybe it's better to just accept that some things aren't meant to be, and that the games were almost certainly NOT intended for play on any particular flavour of monitor, so long as it was in colour (mostly) and both large and sharp enough for the player to see and understand what was going on; a mild suggestion of reality is more than enough, because it could have varied a lot anyway.)
All that said, I can't see the mentioned effect in the linked screenshots. It's a very colourful-looking Megadrive game, and as a result some of the backgrounds have obviously ended up having to be dithered, plus the characters look like they've stepped right out of some dystopian early-90s bande dessinée, but that's about as much "printed magazine effect" as I can make out. It's otherwise very clean and clear, with frankly unrealistically sharp-edged pixels, even. So, uh.... ?!
(the inlined smaller images, I can't see it either... one has a little blurring at the edges, as if taken from a monitor with poor convergence or using a cheap camera not designed for such close-up work, and the other looks to have a tiny amount of mostly random, slightly periodic visual noise in a limited number of places, as if the video cable was in bad condition, but otherwise nothing I'd call "magazine effect"?)
Whether it's actually as much of a problem as is being imagined, though? Perfection is a nice thing to strive for, but in the case of, eg, the Sailor Moon picture, I can't see anything hugely wrong with it as a deliberately retro-styled screengrab. Objectively there are several things that need fixing (the shutter-to-scan sync isn't quite right, the exposure is too long / the aperture too wide, and the final image could have done with some moire filtering before being digitally resized), but not in this particular context. And your other examples, assuming they're things you've made for this project rather than random from-magazine scans (they're damn good quality if so, with just the imperfect contrast and some colour "vibration" in areas of flat, medium-strength background giving the game away), already look spot-on.
They were never that great in the first place, in most magazines, and I did sometimes wonder at the time exactly how they were obtained, as the quality could be very variable, ranging from razor-sharp pixels with lush, perfect colours to barely recognisable smudges and all points in between. In comparison, any old screengrab out of an emulator or a real-hardware framebuffer, with just the mildest of filtering to smooth off overly sharp edges that wouldn't have looked like that onscreen (and which can cause distracting visual noise when downscaled, rotated, etc), will look pretty glorious; any discomfort you might feel is because they make the other pics look bad... don't make the mistake of turning that feeling into "these screengrabs are themselves bad".
Presumably some of the oldskool prints were raw framebuffer grabs, made either with hacker equipment or from files provided by the manufacturer. Others were probably taken with analogue video capture devices, maybe via (S)VHS or pro-grade Beta tape recordings of playthroughs so the most useful moment could be found to illustrate the review (if you're only printing a few centimetres wide, even VHS looks pretty sharp), or with telecine-type camera rigs using sync-detector circuits that precisely timed the shutter to the monitor scan, carefully locked off and exposure-controlled to get a good shot (or with rubbish cameras on plain tripods, with zero sync control and not locked off or exposed very well at all), then physically cropped down and re-scanned (or were the negatives used as-is in the photoset machines?) after development. Wouldn't be surprised if some just used Polaroids on a shoot-and-pray basis, then cut out the results with scissors and physically glued them to the master copy of each page... In short, don't idolise them too much; they probably put in less effort, and had access to far inferior hardware and image-retouching facilities, compared with what you have now :)
(sidenote: this sort of thing, however, is one reason the Mac line, especially the Mac II, had its first massive flush of success amongst creatives - for a good long time it was essentially the only brand of computer, at any price, on which you could carry out the necessary scanning, in-monitor full-colour image editing, and then DTP page compositing with digitised pics... Apple could get away with charging insane amounts for them because no-one else had anything that came close in the graphical horsepower, memory or processor department... and I say that without being any real fan of theirs, it's just something I've learned about their mid-to-late-80s hardware... it took until the 90s for PCs to start catching up, and the Atari/Amiga and the not-even-quasi-PC-compatible Japanese models never really did, ever. That doesn't say anything for the operating system or other aspects of the machinery, but essentially: if you wanted to do business or industrial stuff you got a PC; video or broadcast TV material and gaming required an Amiga (or one of the Japanese machines if you were hardcore about your home arcade replica); making some low-end monochrome publications on the cheap, or doing stuff with MIDI music, called for an ST; and if you wanted to be a full-colour-glossy pro publisher with lots of photos splashed around, it had to be a Mac... and later, doing things in a music studio with the computer also acting as a sample manipulation and output engine, beyond what could be brought about with an ST+Amiga, an STe, or an ST with a fancy sampler cartridge - prior to the birth of the Falcon and the arrival of actually decent 16-bit PC soundcards - that was also Mac territory)
..........right I think I'd better stop there before I go waaaaaaaaaaaaay too far, instead of just a bit too far.