Pastor Jimmy spoke this message a few weeks ago. I just wanted to repost it here – to test my video embedding and because it’s a great message.
So I’ve narrowed down the video switchers I like to two. Just so everyone knows, this is just for research purposes right now, but I’m trying to figure out which switcher would suit Journey best in the future, as well as what would work at Southeastern. I like these two.
Panasonic AV-HS400
This is my favorite choice. If we go HD-SDI, it’s a must. Making it work with all-analog component would be pricier, though, since we’d need more expansion cards. We’d already have to get one for computer input (or get a card for our computer to output in the right format). It does some awesome stuff, though. Our cameras output HD-SDI at 1080i, so that would work.
This is our other option. It has an SD side on the left, which is mixed down and then up-converted to HD on the right. It takes component, composite, and VGA and can output component. Another workable option — our cameras can also output component.
So I just finished editing the Saturday evening sermon at Journey and I’m waiting on the video to export. I figured now would be a perfect time to explain what we do every week: record Jimmy on Saturday night, edit in the lower thirds (the verses that appear at the bottom of the video at Northwest and on the web), export the raw footage, copy the video over to the Northwest iMac, set up the rest of the songs in ProPresenter on the Northwest iMac, and burn a backup DVD in case the computer dies during the sermon. Sound like a lot? It is. And now it’s time to go home.
Why do your graphic supers have funny jagged edges in Photoshop, but look fine on a (television) screen? For that matter, how can anamorphic formats cram so much width into a regular NTSC-type signal? The answer, simply put, is that pixels come in all shapes and sizes.
Recall that pixels are the individual points of color that make up a picture on your screen. While computer screens and similar displays usually use pixels that are square, televisions, historically, have not. In fact, the concept of a “pixel” didn’t figure into analog television signals at all — the NTSC specification called for 525 scan lines (roughly 480 of them carrying visible picture), but the signal within those lines did not specify discrete units of width.
When the notion of digital video became a reality, the standards bodies that be decided that — for both NTSC and PAL — there would be exactly 720 pixels per line. Thus, the 480i resolutions we know and love: 720×480 NTSC, and 720×576 PAL.

Now, in order for video rendered in the new 720x___ proportions to look the same as it always had on analog screens, it didn’t make sense to think of the 720 dots on each row as square. NTSC video, for example, was customarily rendered at a ratio of 4 units wide by 3 units tall. That translates to 640 pixels wide for every 480 pixels tall — not 720.

The solution, then, was to render pixels as non-square: about 0.9 units wide for every unit tall, in the case of NTSC video (and about 1.09:1 for PAL). When encoding widescreen video as anamorphic DV, the ratio became skewed to “fat” pixels — 1.21:1.
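Those ratios aren’t magic numbers — they fall out of simple arithmetic once you know the intended display aspect ratio and the active picture area. Here’s a quick sketch (assuming the ITU-R BT.601 convention that only about 704 of the 720 stored pixels per line form the active 4:3 picture — a detail glossed over above, but it’s what makes the numbers come out right):

```python
from fractions import Fraction

def pixel_aspect_ratio(display_ar, active_height, active_width):
    """PAR = (display aspect ratio x active picture height) / active picture width."""
    return display_ar * active_height / active_width

# 4:3 NTSC: 480 active lines, ~704 active pixels per line
ntsc = pixel_aspect_ratio(Fraction(4, 3), 480, 704)        # 10/11, "about 0.9"

# 4:3 PAL: 576 active lines on the same active width
pal = pixel_aspect_ratio(Fraction(4, 3), 576, 704)         # 12/11, about 1.09

# 16:9 anamorphic NTSC DV: same raster, wider intended picture
anamorphic = pixel_aspect_ratio(Fraction(16, 9), 480, 704) # 40/33, about 1.21

print(float(ntsc), float(pal), float(anamorphic))
```

Run it and you get roughly 0.909, 1.091, and 1.212 — the “about 0.9,” “about 1.09,” and “1.21” figures quoted above.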
Fortunately for all of us, modern standards like HD have evolved in an age where digital editing and dissemination are the norm. HD standards were drawn up with square pixels in mind, so pixel aspect ratios are unimportant when considering fully native HD workflows. But unfortunately, HDV at 1080i — with a native resolution of 1440×1080 to represent HD’s 1920×1080 — assumes fat pixels just as its predecessor DV formats did, this time at a ratio of 1.33:1.
Of course, modern imaging tools like Photoshop and After Effects ship with a wide array of presets fully appropriate to each type of native footage. As long as you realize that these presets involve more than just codecs and pixel resolution, you should avoid nasty surprises involving “squished” graphics.