Saturday, June 22, 2019

Gospel Video Teaching Using VLC with Timestamped Playback for Clips

When teaching or facilitating a discussion, I have found it challenging to use a video clip without losing momentum or causing unnecessary distraction from the lesson being taught.
Ideally, video and audio clips should increase the Spirit of the Lord when used.  I've found that having clips downloaded and queued ahead of time solves this challenge.

In order to create a playlist file in this manner you'll need VLC and a text editor.  You'll also need to have the media file(s) downloaded and saved in the same directory/folder as the playlist M3U file.

I've downloaded a video file of the April 2019 General Conference address given by a member of the Quorum of the Twelve Apostles, Elder Bednar, and saved it locally.

I've created a text file called 2019-06-23_Temple_and_Family_History_Lessson.m3u in the same folder.
There are 4 "clips" or portions of the video I would like to share during the lesson.  By converting each playback timestamp from minutes:seconds to seconds (for example, 2:05 becomes 125), I came up with stop and start timestamps for each clip; those timestamps, together with the video filename, make up the content of the M3U playlist file.
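For illustration, here's a minimal sketch of such a playlist, assuming the video was saved as bednar-apr-2019.mp4 (the filename and all timestamps below are placeholders, not the actual values from the talk).  VLC reads the per-entry #EXTVLCOPT start-time and stop-time options, given in seconds:

```
#EXTM3U
#EXTINF:-1,Clip 1
#EXTVLCOPT:start-time=95
#EXTVLCOPT:stop-time=160
bednar-apr-2019.mp4
#EXTINF:-1,Clip 2
#EXTVLCOPT:start-time=430
#EXTVLCOPT:stop-time=505
bednar-apr-2019.mp4
#EXTINF:-1,Clip 3
#EXTVLCOPT:start-time=820
#EXTVLCOPT:stop-time=880
bednar-apr-2019.mp4
#EXTINF:-1,Clip 4
#EXTVLCOPT:start-time=1130
#EXTVLCOPT:stop-time=1200
bednar-apr-2019.mp4
```

Each playlist entry points at the same video file; VLC begins playback at start-time and stops at stop-time, so stepping Next/Previous walks through the clips in order.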

VLC allows playback to be paused and resumed using the spacebar key by default.  To jump between clips, use the N (next) and P (previous) keys.

Double-clicking the file 2019-06-23_Temple_and_Family_History_Lessson.m3u will launch VLC with the playlist of clips at the timestamps specified.

The VLC interface can be further customized via Tools >> Preferences to play full screen by default to minimize distraction. 

VLC can also launch paused by default.
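Both behaviors can also be set per launch from the command line instead of in the global preferences.  A sketch, assuming VLC 3.x option names (verify against vlc --help on your install):

```
vlc --fullscreen --start-paused 2019-06-23_Temple_and_Family_History_Lessson.m3u
```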

Tuesday, February 19, 2019

SQL Server Backup Solution? Pick Five. Or Six.

SQL Server arguably needs 5 or 6 of these things for backups:
(Image: a bamboo rice steamer.  When you see shapes that remind you of SQL Server...)
  1. SQL native backup compression support (less data loss/lower RPOs)

    Please don't ever ask to disable native backup compression on SQL Server backups.  In my experience with SQL Server (which I'll admit is barely over a decade), as soon as a proprietary filter-level driver, proprietary stored procedure, agent software, or Virtual Backup Device Interface (VDI) is introduced, the feeling of confidence in your backups - and the ability to manage, test, and quickly restore them - begins to fade.  I'm looking at you, forever incrementals and buzzword solutions with zones and boosts.  Same with magic metadata/bitmaps/snapshots/clones - even if they have their own cmdlets.

    And what's the cost per GB?  And the ongoing cost?
  2. Encryption At Rest (because compliance)
  3. High Sequential I/O Throughput (less data loss/lower RPOs/lower RTOs[faster restores])

    Test your storage - can it sustain 1 GB (that's gigabytes) per second of throughput on backup or restore when watching the network traffic and local I/O on a given volume?  More?  How many physical interfaces are you using?  What type?  10 gigabit?  VMs eventually map to physical network interfaces.  If/when the interfaces are saturated on a given SQL Server, how does it affect server performance?  Does SQL Server allow new connections at this point?  See number 6.

    What hardware is being used for your backup solutions?  Mechanical drives?  Why not flash instead?  Which CPUs?  How much memory?  How much read and write cache, and on what media?  If a vendor isn't forthcoming with the hardware specs, I see it as an indication 🚩 that there is something to hide.
  4. Share/filesystem security (NTFS ACLs work)
    I'm not a big fan of IP whitelisting on its own, as it makes your backups harder to protect.  Your employer could cease to exist if data is exfiltrated - not just your job.
    (Image: Doraemon translates backupspeak.)
  5. HA (clustered for patching / ~99.9% uptime or higher)
    What good is awesome throughput if the backup target has to be taken offline every few months or weeks to be patched?  Where will backups be written when the backup target is offline?
  6. Recommended: Dedicated VLAN/interface(s) for backup traffic
    It's reassuring to know that backups and restores aren't going to prevent a legitimate connection from being established to an instance of SQL Server.
  7. optional: Dedupe (to save space)
    Windows Server dedupe can give you a 3:1 ratio on SQL compressed backups.  Really.  I've seen this on a VLDB over time (and it was also heavily using page-level index compression).
  8. optional: Replication to another datacenter
    What happens when a fiber-seeking backhoe decides to take out a preferred datacenter for 36 hours?
It seems database backups often get lumped in with some other backup appliance until that appliance can't handle the load, storage capacity, or dedupe.  How much are you paying for SQL Server?  Should it have its own dedicated backup solution?
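To make items 1 through 3 concrete, here's a sketch in T-SQL: a native full backup with compression, checksums, and encryption at rest, followed by a rough throughput check against the backup history in msdb.  The database name, backup path, and certificate name (BackupCert) are placeholders, and the ENCRYPTION clause assumes a server certificate already exists in master.

```sql
-- Native full backup: compressed, checksummed, encrypted at rest
BACKUP DATABASE [YourDb]
TO DISK = N'\\backuptarget\sql\YourDb_FULL.bak'
WITH COMPRESSION,
     CHECKSUM,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
     STATS = 10;

-- Rough backup throughput (MB/sec) from recent backup history
SELECT TOP (10)
       bs.database_name,
       bs.backup_finish_date,
       CAST(bs.backup_size / 1048576.0
            / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0)
            AS DECIMAL(10,1)) AS mb_per_sec
FROM msdb.dbo.backupset AS bs
ORDER BY bs.backup_finish_date DESC;
```

If the mb_per_sec numbers fall well short of what the storage and network interfaces should sustain, that's where to start asking the questions in item 3.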

Grant Fritchey has a great video(more of a public service announcement) about testing backups, and Chrissy LeMaire has a post on building a dedicated server for testing your backups.  Brent Ozar has a post from 2009 re: dedupe and backups that has been commented on for over 8.5 years.

Friday, June 08, 2018

Why not use the Standard Edition of SQL Server?

The question invariably comes up - can't we just use the Standard Edition of SQL Server instead of the more expensive Enterprise Edition?

Here are some considerations:

1. Arguably the biggest advantage of Enterprise Edition is increased density - more VMs and more databases can be run on a given physical host using more cores and more memory.

2. Without Enterprise Edition (prior to SQL Server 2016 SP1) you cannot compress your indexes, which means more space will be consumed in memory by data pages, decreasing server density.

3. You may have experienced what happens when Standard Edition is forced as a solution - poor designs leading to increased customer risk and increased downtime.  A single indexing-related outage when online rebuilds were not available can justify the cost difference, if cost is the only consideration.

4. If HA, high performance, server density, and additional failover options in the event of catastrophic outages are not needed, then Enterprise Edition may not be needed.

5. Enterprise Edition provides better storage performance.
(Image: Lego Superman prefers the more powerful SQL Server Enterprise Edition.)

6. On Standard Edition, only 2 cores will be used when a batch mode query goes parallel.  This can affect application performance when querying the database.  Glenn Berry states the following:

  • Degree of Parallelism: Batch mode queries are limited to MAXDOP = 2 on Standard Edition. Enterprise Edition can use all of the cores that are present for the instance. This can be very significant on larger queries on typical, modern server hardware.
7. With Standard Edition there are hard CPU limits: whether VM or physical, Standard Edition will use the lesser of 4 sockets or 24 physical cores, and any additional cores will not be used.  I learned this from attending Glenn Berry's pre-con at SQL Intersection in March 2018.

8. With Standard Edition you will not be able to encrypt an entire database without changing application code (Transparent Data Encryption/TDE).  Security-conscious customers will ask for this.

9. Enterprise Edition AlwaysOn Availability Groups can be configured to mitigate zero-day attacks by having replicas span multiple operating systems.

10. AlwaysOn Availability Groups provide synchronous or asynchronous readable replicas that can be used for up-to-the-second reporting on production OLTP systems while preserving performance for the external-facing customers of those systems.

11. One of our team's preferred design patterns is a CNAME pointing to the AG listener.  Using CNAMEs with AlwaysOn Availability Groups allows connection strings to remain unaltered through future parallel buildouts and additional replicas.
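Tying back to points 2 and 3 above, here's a sketch of the kind of statement Standard Edition blocks (the table and index names are hypothetical; ONLINE = ON is Enterprise-only, and DATA_COMPRESSION required Enterprise Edition prior to SQL Server 2016 SP1):

```sql
-- Page compression keeps more data pages in the buffer pool per GB of RAM;
-- ONLINE = ON avoids taking the table offline during the rebuild.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
```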

A valued co-worker put it this way: Our database production environments are established and maintained based on best in class solutions.