Category: Tips & Tricks

Beware “Phishy” Emails

Jun 18, 2020 by Sam Taylor

By Wassef Masri

When the accounting manager at a major US retail company received an email from HR regarding harassment training, he trustingly clicked on the link. Had he looked more closely, he would have noticed that the sender was only a look-alike address. Consequently, he was spear-phished.

The hackers then emailed all of the company’s clients to inform them of a change in banking details. The emails were then deleted from the “sent” folder. By the time the scam was discovered a month later, $5.1 million had been stolen.

As in the 2008 crisis, cyber-crime is on the rise. This time, however, hackers are greater in number and more refined in technique. Notably, the emergence of malware-as-a-service offerings on the dark web is giving rise to a class of non-technical hackers with strong marketing and social engineering skills.

Phishing emails are the most common attack vector and are often the first stage of a multi-stage attack. Most organizations today experience at least one attack a month.

What started as “simple” phishing that fakes banking emails has evolved into three types of attacks that increase in sophistication:

  • Mass phishing: Starts with a generic salutation (e.g. “Dear customer”) and impersonates a known brand to steal personal information such as credit card credentials.
  • Spear phishing: More customized than mass phishing; addresses the target by name, also using spoofed emails and sites.
  • Business Email Compromise (BEC): Also known as CEO fraud, this is more advanced because it is sent from compromised email accounts, making it harder to uncover. It mostly targets company funds.

How to Protect Against Phishing?

While there is no magical solution, best practice is a multi-layered approach that combines advanced technologies with user education:

1. User awareness: Frequent testing campaigns and training.

2. Configuration of the email system to highlight emails that originate from outside the organization.

3. Secure email gateway that blocks malicious emails or URLs. It includes:

  • Anti-spam
  • IP reputation filtering
  • Sender authentication
  • Sandboxing
  • Malicious URL blocking

4. Endpoint security: The last line of defense; if the user does click a malicious link or attachment, a good endpoint solution has:

  • Deep learning: blocks new unknown threats
  • Anti-exploit: stops attackers from exploiting software vulnerabilities
  • Anti-ransomware: stops unauthorized encryption of company resources

It is not easy to justify extra spending, especially with the decrease in IT budgets projected for 2020. It is essential, however, to have a clear strategy to prioritize action and to involve organization leadership in mitigating these threats.

Leave a comment or send an email to wmasri@crossrealms.com for any questions you might have!

Tips and Tricks with MS SQL (Part 10)

Mar 26, 2020 by Sam Taylor

Cost Threshold for Parallelism? A Simple Change to Boost Performance

Many default configuration values built into Microsoft SQL Server are long-standing values expected to be changed by a DBA to fit the needs of their environment. One of the configurations often left unchanged is “Cost Threshold for Parallelism” (CTFP). In short, CTFP determines, based on a query’s estimated cost (i.e., the estimated workload of its query plan), whether it is eligible to execute in parallel across multiple CPU threads. A higher CTFP value prevents a query from running in parallel unless its cost exceeds the set value.

Certain queries may be best suited to single-core execution, while others benefit more from parallel, multi-core execution. The determination is based on many variables, including the physical hardware, the type of queries, and the type of data, among other things. The good news is that SQL Server’s Query Optimizer helps make these decisions using each query’s “cost”, based on the query plan it executes. Cost is assigned by the cardinality estimator (more on that later).

Here’s our opportunity to optimize the default CTFP value of 5. The SQL Server algorithm that determines query plan cost (the cardinality estimator) changed significantly between SQL Server 2012 and present-day SQL Server 2016+. Increasing CTFP to a higher value keeps cheaper queries running single-threaded, which for those queries is generally faster than parallel execution (especially given the strong single-core performance of top commercial-grade CPUs). The common consensus on almost every SQL tuning site, including Microsoft’s own docs, is that this value should be increased, with 20 to 30 widely regarded as a good starting point. Compare your current query plan execution times, increase CTFP, compare the new times, and repeat until the results are most favorable.

Since my future blog posts in this series will become more technical, right now is a perfect time to get your feet wet. Here are two different methods you can use to make this change.

Method 1: T-SQL

Copy/paste the following T-SQL into a new query window:

            USE [DatabaseName];  -- The database context to run this in
            GO
            EXEC sp_configure 'show advanced options', 1;  -- This enables CTFP to be changed
            GO
            RECONFIGURE;
            GO
            EXEC sp_configure 'cost threshold for parallelism', 20;  -- The CTFP value will be 20 here
            GO
            RECONFIGURE;
            GO

Method 2: GUI

To make changes via SQL Server Management Studio:

            1. In Object Explorer, right-click the instance, select Properties, then Advanced, and under “Parallelism” change the value of “Cost Threshold for Parallelism” to 20.

            2. For the changes to take effect, open a query window and execute “RECONFIGURE”.
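
Whichever method you use, you can confirm the value currently in effect with a quick query against sys.configurations (the 20 above is just a starting point, not a magic number):

            SELECT name, value, value_in_use
            FROM sys.configurations
            WHERE name = 'cost threshold for parallelism';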

If you’d like to learn how to see query plan execution times, which queries to compare, and how to see query costs, leave a comment or message me. Keep an eye out for my next post, which will include queries to help you identify everything I’ve covered in this blog series so far. Any questions, comments, or feedback are appreciated! Leave a comment or send me an email at aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks With MS SQL (Part 9)

Mar 18, 2020 by Sam Taylor

Backups Need Backups

This week I’ve decided to cover something more in the style of a PSA than the usual configurations and technical quirks that help speed up Microsoft SQL Server. The reason for the change of pace is what I’ve been observing lately. It’s not pretty.

Backups end up being neglected. I’m not just pointing fingers at the primary backups: where are the backups’ backups? The issue here is what happens when the primary backups accidentally get deleted, become corrupt, or the entire disk ends up FUBAR. This happens more often than people realize. A disaster recovery plan that doesn’t have primary backups replicated to an offsite network, or at the very least kept in an isolated location, is a ticking time bomb.

A healthy practice for primary backups is to verify their integrity after they complete. You can have Microsoft SQL Server perform checksum validation before writing the backup to media. That way, if the checksum value for any page doesn’t exactly match what was written to the backup, you’ll know the backup is trash. This can be done via scripts, jobs, or manual backups. Look for the “Media Options” page when running a backup task in SQL Server Management Studio; the two boxes to enable are “Verify backup when finished” and “Perform checksum before writing to media”.
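
If you prefer scripting the same idea, here is a minimal sketch; the database name and backup path are placeholders, so adjust them for your environment:

            -- Validate page checksums while the backup is written
            BACKUP DATABASE [YourDatabase]
            TO DISK = N'B:\Backups\YourDatabase_Full.bak'
            WITH CHECKSUM, STATS = 10;

            -- Confirm the backup file is readable and its checksums still match
            RESTORE VERIFYONLY
            FROM DISK = N'B:\Backups\YourDatabase_Full.bak'
            WITH CHECKSUM;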

It’s true we’re adding extra overhead here, and backups might take a bit longer to finish. But I’ll leave it up to you to decide whether the extra time is worth having a working backup you can trust to restore your database, versus a broken backup wasting precious resources. If you decide time is more important, then at least have a script perform these reliability checks on a regular basis, or schedule regular test restores to make sure the backups even work.

If you follow this advice, you can rest easy knowing your data can survive multiple points of failure before anything is lost. If the server room goes up in flames, you can always restore from the offsite backups. If you need help setting up backup redundancy, want a script to test backup integrity, or have questions about anything I covered, feel free to reach out. Any questions, comments, or feedback are always appreciated! Leave a comment or send me an email at aturika@crossrealms.com for any SQL Server questions you might have!

Helpful Tips for Remote Users in the Event of a Coronavirus Outbreak

Mar 3, 2020 by Sam Taylor

Remember: Planning ahead is critical.

In response to recent news, we have a few reminders to assist with your remote access preparedness to minimize the disruption to your business. 

Remote Access

Make sure your users have access to and are authorized to use the necessary remote access tools (VPN and/or Citrix). If you do not have a remote access account, please request one from your management, who can forward their approval to IT.

Email

If you are working from home and dealing with large attachments, they can also be shared using a company-approved file sharing system such as Office 365’s OneDrive, Dropbox, or Citrix ShareFile. Make sure you are approved to use such a service and have the relevant user IDs and passwords. It’s best to test them out before you need to use them. Make sure to comply with any security policies in effect for using these services.

Office Phone

Ensure continued access to your 3CX office phone by doing either of these things:

  1. Install the 3CX phone software on your laptop, tablet, or smartphone.
  2. Forward your calls to your cell or home phone. Remember, you can also access your work voicemail remotely.

Virtual Meetings

Web meetings and video conferences become critical business tools when working remotely. Make sure you have an account with your company’s web meeting/video service, along with the username and password. It is a good idea to test it now to ensure your access is working correctly.

Other Recommendations

Prepare now by taking note of the information and supplies you need on a daily basis. Then bring the critical information and supplies home with you in advance so you have them available in the event you need to work remotely. Such items may include:

  1. Company contact information, including emergency contact info and phone numbers

  2. Home office supplies such as printer paper, toner and flash drives.

  3. Mailer envelopes large enough to send documents, etc.

  4. The closest express mailing location to your home, and company account information if available

CrossRealms can help set up and manage any or all of the above for you so you can focus on your business and customers.

If you are a current CrossRealms client, please feel free to contact our hotline at 312-278-4445 and choose No.2, or email us at techsupport@crossrealms.com

We are here to help!

Tips and Tricks with MS SQL (Part 8)

Dec 23, 2019 by Sam Taylor

Tame Your Log Files!

By default, the recovery model for databases in Microsoft SQL Server is set to “full”. This can cause issues for the uninitiated: if backups aren’t fully understood and managed correctly, log files can bloat in size and get out of control. With the “full” recovery model you get the advantage of flexible point-in-time restores and support for high-availability scenarios, but it also means having to run separate backups for the log files in addition to the data files.

To keep things simple, we’ll look at the “simple” recovery model first. When you run backups, you’re only dealing with data backups, whether full or differential. The log file, which holds transactions between full backups, isn’t something you need to concern yourself with unless you’re doing advanced disaster recovery, like database mirroring, log shipping, or high-availability setups.

When dealing with the “full” recovery model, you’re in charge of backing up not only the data files but the log files as well. In a healthy server configuration, log files are much smaller than data files, which means you can run log backups every 15 minutes or every hour without as much IO activity as a full or differential backup. This is where you get the point-in-time flexibility. It’s also where I often see a lot of issues…

Log files run astray. A new database might be created or migrated, and the default recovery model is still “full”. A server that relies on a simpler setup might not catch this, nor have log backups in place. This means the log file will start growing exponentially, towering over the data file size and creating hordes of VLFs (look out for a future post about these). I’ve seen a lot of administrators who don’t know how to control this and resort to shrinking databases or files, which is just something you should never do unless your intentions are data corruption and breaking things.

My advice here is to keep it simple. If you understand how to restore a full backup, differential backups, and log backups, including the order they should be restored in and when to use the “NORECOVERY” flag, or you have third-party software doing this for you, you’re all set. If you don’t, I would suggest setting up log backups to run at regular, short intervals (15 minutes to 1 hour) as a precaution and changing the database recovery models to “simple”. This can keep you protected when you accidentally pull in a database that defaulted to the “full” recovery model and its log file starts eating the entire disk.
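
For reference, a bare-bones restore sequence looks something like the sketch below; the file names are hypothetical, and each step uses “NORECOVERY” so further backups can still be applied:

            -- Full backup first, leaving the database in a restoring state
            RESTORE DATABASE [YourDatabase]
            FROM DISK = N'B:\Backups\YourDatabase_Full.bak' WITH NORECOVERY;

            -- Most recent differential (if you take them)
            RESTORE DATABASE [YourDatabase]
            FROM DISK = N'B:\Backups\YourDatabase_Diff.bak' WITH NORECOVERY;

            -- Log backups in order, up to the point in time you need
            RESTORE LOG [YourDatabase]
            FROM DISK = N'B:\Backups\YourDatabase_Log1.trn' WITH NORECOVERY;

            -- Bring the database online once the last backup is applied
            RESTORE DATABASE [YourDatabase] WITH RECOVERY;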

 

Pro Tip: Changing your “model” database’s recovery model will determine the default recovery model used for all new databases you create.
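
If you do decide to flip a database (or the “model” database) to “simple”, it is a one-liner per database; the database name below is a placeholder:

            -- Switch an existing database to the simple recovery model
            ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;

            -- Make simple the default for all newly created databases
            ALTER DATABASE [model] SET RECOVERY SIMPLE;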

 

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 7)

Dec 6, 2019 by Sam Taylor

Quickly See if Ad Hoc Optimization Benefits Your Workloads​

A single setting frequently left disabled can make a huge performance impact and free up resources. It is a system-wide setting that allows Microsoft SQL Server to optimize its processing for “ad hoc” workloads. Most SQL Servers I come across that rely heavily on ETL (Extract, Transform, Load) workloads for their day-to-day would benefit from enabling “Optimize for Ad Hoc Workloads”, but often don’t have the setting enabled.

If you perform a lot of ETL workloads and want to know whether enabling this option will benefit you, I’ll make it simple. First we need to determine the percentage of your plan cache that is ad hoc. To do so, just run the following T-SQL script in SQL Server Management Studio:

SELECT AdHoc_Plan_MB, Total_Cache_MB,
       AdHoc_Plan_MB * 100.0 / Total_Cache_MB AS 'AdHoc %'
FROM (
    SELECT SUM(CASE
                   WHEN objtype = 'adhoc'
                   THEN size_in_bytes
                   ELSE 0 END) / 1048576.0 AS AdHoc_Plan_MB,
           SUM(size_in_bytes) / 1048576.0 AS Total_Cache_MB
    FROM sys.dm_exec_cached_plans) T

After running this, you’ll see a column labelled “AdHoc %” with a value. As a general rule of thumb, I prefer to enable optimizing for ad hoc workloads when this value reaches 20-30% or more. The number will change depending on when the server was last restarted, so it’s best to check after the server has been running for at least a week or so. The change only takes effect for newly cached plans, so for the impatient, a quicker way to see the results is to restart the SQL services, which clears the plan cache.

Under extremely rare circumstances this could actually hinder performance. If that’s the case, just disable the setting and continue on as you were before. As always, feel free to ask me directly so I can help. There isn’t any harm in testing whether this benefits your environment or not. To enable the optimization, right-click the SQL instance in SQL Server Management Studio’s Object Explorer, go to Properties, then Advanced, change “Optimize for Ad Hoc Workloads” to “True”, and click “Apply”. From there, run the query “RECONFIGURE” to put the change into action.
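
If you’d rather script the change than click through the GUI, a minimal T-SQL sketch looks like this:

EXEC sp_configure 'show advanced options', 1;  -- expose advanced settings
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;  -- 1 = enabled
RECONFIGURE;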

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 6)

Dec 6, 2019 by Sam Taylor

Increase the Number of TEMPDB Data Files

If you’re having issues with queries that contain insert/update statements, temp tables, table variables, calculations, or grouping and sorting of data, it’s possible you’re seeing some contention within the TEMPDB data files. A lot of Microsoft SQL Servers I come across have only a single TEMPDB data file. That’s not a best practice according to Microsoft. If you have performance issues when the aforementioned queries run, it’s a good idea to check the number of TEMPDB files you have, because often just one isn’t enough.

SQL Server places certain locks on databases, including TEMPDB, when it processes queries. So if you have 12 different databases all running queries with complex sorting algorithms and processing calculations over large datasets, all that work is first done in TEMPDB. A single TEMPDB file doesn’t just hurt performance and efficiency; it can also slow down other processes running alongside it by hogging resources and/or increasing wait times. Luckily, the resolution is super simple if you’re in this situation.

Increase the number of data files in TEMPDB to maximize disk bandwidth and reduce contention. As Microsoft recommends, if the number of logical processors is less than or equal to 8, that’s the number of data files you’ll want. If the number of logical processors is greater than 8, just use 8 data files. If you’ve got more than 8 logical processors and still experience contention, increase the data files in multiples of 4 while not exceeding the number of logical processors. If you still have contention issues, consider looking at your workload, code, or hardware to see where improvements can be made.

PRO TIP: When you increase the number of your TEMPDB data files (on its separate drive… remember?) take this time to pre-grow your files. You’ll want to pre-grow all the data files equally and enough to take up the entire disk’s space (accounting for TEMPDB’s log file).
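
For reference, adding one pre-grown TEMPDB data file looks roughly like this; the logical name, path, and sizes are placeholders you’d adjust for your core count and disk:

            ALTER DATABASE [tempdb]
            ADD FILE (
                NAME = N'tempdev2',                    -- placeholder logical name
                FILENAME = N'T:\TempDB\tempdev2.ndf',  -- on the dedicated TEMPDB drive
                SIZE = 8GB,                            -- pre-grown, equal to the other data files
                FILEGROWTH = 64MB
            );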

 

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 5)

Dec 6, 2019 by Sam Taylor

Separate Your File Types

It’s too common and important an issue not to mention the need for file separation in this series. If you’re running any version of Microsoft SQL Server, it’s important to separate your file types into different logical or physical locations. “Data” files, “Log” files, and “TEMPDB” files shouldn’t ever live on the same logical drive. Keeping them together has a big impact on performance and makes read/write contention much harder to isolate when troubleshooting issues.

It’s understandable: the quick need for a SQL Server pops up and you install a Developer Edition or Express Edition in 10 minutes, leaving the file types in their default locations. However, once this system becomes a production server, you’d better know how to relocate these files to new locations, or do it right the first time around. It’ll be easier early on rather than after the data grows and needs a bigger maintenance window to move.
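
If you do end up relocating files later, the general approach is sketched below; the logical file name and paths are placeholders (query sys.database_files for the real logical names):

            -- 1. Point SQL Server at the new location for the next startup
            ALTER DATABASE [YourDatabase]
            MODIFY FILE (NAME = N'YourDatabase_log', FILENAME = N'L:\Logs\YourDatabase_log.ldf');

            -- 2. Take the database offline, move the physical file at the OS level, then bring it back
            ALTER DATABASE [YourDatabase] SET OFFLINE;
            ALTER DATABASE [YourDatabase] SET ONLINE;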

To keep with Microsoft Best Practices, you can use a drive naming convention similar to what I’ve listed below to help remember where to place your files. If you’re fortunate enough to have physical drive separation, all the power to you. For most servers I see in this situation, it’s best to start with logical separation at a minimum to yield some powerful results.

Filetype Mapping:

– C:\ – System Databases (default MS SQL installation location)
– D:\ – Data Files
– L:\ – Log Files
– T:\ – TEMPDB Files
– B:\ – Backup Files (with redundancy, of course…)

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 4)

Dec 6, 2019 by Sam Taylor

Don't Forget to Enable "IFI" on New Installations​

Instant File Initialization (IFI) is a simple feature with performance benefits that is often left behind on installations of SQL Server that have seen their share of upgrades or migrations. If it wasn’t available in previous versions of Windows Server or Microsoft SQL Server, there’s a good chance someone unfamiliar with its purpose didn’t enable it during an upgrade; why risk enabling a new feature on a system that’s been stable and passed the test of time? During installations of SQL Server 2016 onwards, it presents itself as the “Grant Perform Volume Maintenance Task” checkbox SQL Server asks you to check or leave off (1). It can be enabled in older SQL versions as well, though by different means.

The benefit of enabling this is being able to write data to disk faster. Without IFI enabled, any time SQL Server needs to claim space on disk, it must first zero out that space, overwriting whatever previously deleted files occupied it. This happens any time a new database is created, data or log files are added, database size is increased (including autogrowth events), and when a database is restored. Enabling IFI bypasses this “overwrite the disk with zeros” step of the Windows file initialization process. The resulting benefits to disk performance compound as data grows, especially when non-solid-state media is used.

An analogy for what’s happening here is formatting a USB thumb drive and being presented with the “Perform a quick format” checkbox. Quick Format is like enabling IFI: Windows just claims all the disk space quickly and lets you go about your day. Without Quick Format, Windows writes zeros to every sector of the drive (which also reveals bad sectors, though that part is unrelated to SQL’s use of IFI), which takes much longer because it’s essentially writing enough to cover all the available space. You’ve probably noticed the difference in formatting speeds before. The performance benefit of Quick Format is like SQL Server with IFI enabled, and it becomes more evident as the size of the storage or data increases.

Note (1): If you’re using a SQL domain user account as the service logon account instead of the service account SQL Server defaults to (NT Service\MSSQLSERVER), you’ll need to grant that account the “Perform Volume Maintenance Tasks” right separately under “Local Policies”. Double-check that your SQL service account has this right granted, to be safe. For instructions on granting permissions, you can follow Microsoft’s documentation here.
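
As a quick sanity check, newer builds (SQL Server 2016 SP1 and later, if memory serves) expose whether IFI is in effect for the database engine service through a DMV:

            SELECT servicename, instant_file_initialization_enabled
            FROM sys.dm_server_services
            WHERE servicename LIKE 'SQL Server (%';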

If you want to know other ways to enable IFI on your server without reinstalling SQL Server, or want to know more about checking whether IFI is enabled, feel free to reach out. Any questions, comments, or feedback are appreciated! Leave a comment or send me an email at aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 3)

Dec 6, 2019 by Sam Taylor

Change Database Auto-Growth from Percent-Based to Fixed-Size Growth


In an ideal world, all the Microsoft SQL Servers I come across would have their databases pre-grown to account for future growth, with their needs re-evaluated periodically. Unfortunately, this is almost never the case. Instead, these databases rely on SQL Server’s autogrowth feature to expand their data files and log files as needed. The problem is that the default is set to autogrow data files by 1MB(*) and log files by 10%(*).

Since this was such a big issue for performance, Microsoft made some changes in SQL Server 2016 onward, where data files and log files both default to a growth increment of 64MB. If your server is still using 1MB autogrowth for data and 10% autogrowth for logs, consider following Microsoft’s new defaults and bumping it up to at least 64MB.

Growing a data file in 1MB increments means the server must do extra work. If it needs to grow by 100MB, it must send over 100 requests to grow by 1MB, add data, then ask to grow again, and repeat. Imagine how bad this gets for databases growing by gigabytes a day! It’s even worse when growing by a percentage, because the server has to do some computing before it can grow. Growing 10% of 100MB is easy to account for, but as the log file grows it can quickly get out of hand, running away and bloating your storage system while adding CPU overhead as an extra kick in the rear!

The change is luckily very simple. Right-click one of the user databases in SQL Server Management Studio and select “Properties”. From there, click on the “Files” page. Next, expand the “…” button in the Autogrowth cell for the “ROWS” data file and change the growth to 64MB or greater (depending on how much room you have to work with and the growth you expect). Do the same for the “LOG” file type. That’s it! You’re done and gave your server some well-needed breathing room!
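
If you’d rather script it, the same change is one statement per file; the logical file names below are placeholders (check sys.database_files for yours):

            ALTER DATABASE [YourDatabase]
            MODIFY FILE (NAME = N'YourDatabase', FILEGROWTH = 64MB);      -- data file

            ALTER DATABASE [YourDatabase]
            MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 64MB);  -- log file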

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@crossrealms.com for any SQL Server questions you might have!