Script all indexes for all tables in a database (SQL Script)

The need often arises to create or recreate the indexes for all tables in a database, especially in development and testing scenarios. This article presents a script to generate index creation scripts for all tables in a database using Transact-SQL (T-SQL).

    SELECT ' CREATE ' +
        CASE WHEN I.is_unique = 1 THEN ' UNIQUE ' ELSE '' END +
        I.type_desc COLLATE DATABASE_DEFAULT + ' INDEX ' + I.name + ' ON ' +
        Schema_name(T.Schema_id) + '.' + T.name + ' ( ' +
        KeyColumns + ' ) ' +
        ISNULL(' INCLUDE (' + IncludedColumns + ' ) ', '') +
        ISNULL(' WHERE ' + I.Filter_definition, '') + ' WITH ( ' +
        CASE WHEN I.is_padded = 1 THEN ' PAD_INDEX = ON ' ELSE ' PAD_INDEX = OFF ' END + ',' +
        'FILLFACTOR = ' + CONVERT(CHAR(5), CASE WHEN I.Fill_factor = 0 THEN 100 ELSE I.Fill_factor END) + ',' +
        -- default value
        'SORT_IN_TEMPDB = OFF ' + ',' +
        CASE WHEN I.ignore_dup_key = 1 THEN ' IGNORE_DUP_KEY = ON ' ELSE ' IGNORE_DUP_KEY = OFF ' END + ',' +
        -- default value
        ' DROP_EXISTING = ON ' + ',' +
        -- default value
        ' ONLINE = OFF ' + ',' +
        CASE WHEN I.allow_row_locks = 1 THEN ' ALLOW_ROW_LOCKS = ON ' ELSE ' ALLOW_ROW_LOCKS = OFF ' END + ',' +
        CASE WHEN I.allow_page_locks = 1 THEN ' ALLOW_PAGE_LOCKS = ON ' ELSE ' ALLOW_PAGE_LOCKS = OFF ' END +
        ' ) ON [' + DS.name + '] ' AS [CreateIndexScript]
    FROM sys.indexes I
     JOIN sys.tables T ON T.object_id = I.object_id
     JOIN sys.sysindexes SI ON I.object_id = SI.id AND I.index_id = SI.indid
     JOIN (SELECT * FROM
        (SELECT IC2.object_id, IC2.index_id,
            STUFF((SELECT ' , ' + C.name + CASE WHEN MAX(CONVERT(INT, IC1.is_descending_key)) = 1 THEN ' DESC ' ELSE ' ASC ' END
        FROM sys.index_columns IC1
        JOIN sys.columns C
           ON C.object_id = IC1.object_id
           AND C.column_id = IC1.column_id
           AND IC1.is_included_column = 0
        WHERE IC1.object_id = IC2.object_id
           AND IC1.index_id = IC2.index_id
        GROUP BY IC1.object_id, C.name, IC1.index_id
        ORDER BY MAX(IC1.key_ordinal)
           FOR XML PATH('')), 1, 2, '') KeyColumns
        FROM sys.index_columns IC2
        --WHERE IC2.object_id = object_id('Person.Address') --Comment for all tables
        GROUP BY IC2.object_id, IC2.index_id) tmp3) tmp4
      ON I.object_id = tmp4.object_id AND I.index_id = tmp4.index_id
     JOIN sys.stats ST ON ST.object_id = I.object_id AND ST.stats_id = I.index_id
     JOIN sys.data_spaces DS ON I.data_space_id = DS.data_space_id
     JOIN sys.filegroups FG ON I.data_space_id = FG.data_space_id
     LEFT JOIN (SELECT * FROM
        (SELECT IC2.object_id, IC2.index_id,
            STUFF((SELECT ' , ' + C.name
        FROM sys.index_columns IC1
        JOIN sys.columns C
           ON C.object_id = IC1.object_id
           AND C.column_id = IC1.column_id
           AND IC1.is_included_column = 1
        WHERE IC1.object_id = IC2.object_id
           AND IC1.index_id = IC2.index_id
        GROUP BY IC1.object_id, C.name, IC1.index_id
           FOR XML PATH('')), 1, 2, '') IncludedColumns
       FROM sys.index_columns IC2
       --WHERE IC2.object_id = object_id('Person.Address') --Comment for all tables
       GROUP BY IC2.object_id, IC2.index_id) tmp1
       WHERE IncludedColumns IS NOT NULL) tmp2
    ON tmp2.object_id = I.object_id AND tmp2.index_id = I.index_id
    WHERE I.is_primary_key = 0 AND I.is_unique_constraint = 0
    --AND I.object_id = object_id('Person.Address') --Comment for all tables
    --AND I.name = 'IX_Address_PostalCode' --Comment for all indexes
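The heart of the script is plain string assembly: each flag in sys.indexes becomes one clause of the generated WITH ( ... ) option list. The same logic can be sketched in miniature outside SQL Server; this Python sketch is illustrative only, and its parameter names simply mirror the sys.indexes columns the script reads:

```python
# Sketch of the per-index option-string assembly the T-SQL above performs.
# Parameter names mirror sys.indexes columns; inputs here are hypothetical.
def build_with_options(is_padded, fill_factor, ignore_dup_key,
                       allow_row_locks, allow_page_locks):
    on_off = lambda flag: "ON" if flag else "OFF"
    return " WITH ( " + ", ".join([
        f"PAD_INDEX = {on_off(is_padded)}",
        # fill_factor = 0 means "use the default", which is 100
        f"FILLFACTOR = {100 if fill_factor == 0 else fill_factor}",
        "SORT_IN_TEMPDB = OFF",   # default value, hard-coded by the script
        f"IGNORE_DUP_KEY = {on_off(ignore_dup_key)}",
        "DROP_EXISTING = ON",     # default value, hard-coded by the script
        "ONLINE = OFF",           # default value, hard-coded by the script
        f"ALLOW_ROW_LOCKS = {on_off(allow_row_locks)}",
        f"ALLOW_PAGE_LOCKS = {on_off(allow_page_locks)}",
    ]) + " )"

print(build_with_options(0, 0, 0, 1, 1))
```

Note in particular the FILLFACTOR rule: sys.indexes stores 0 for "default", so the script (and this sketch) translates 0 to 100 before emitting the clause.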



Relearning How You See the Web

Analyzing how a website fits in its “web neighborhood”

- Viewing websites like an SEO
- Assessing good site architecture and webpages from an SEO perspective
- Assessing website content like an SEO

When people surf the Internet, they generally view each domain as its own island of information. This works perfectly well for the average surfer but is a big mistake for beginner SEOs. Websites, whether they like it or not, are interconnected.

This is a key perspective shift that is essential for understanding SEO. Take Facebook, for example. It started out as a “walled garden” with all of its content hidden behind a login. It thought it could be different and remain completely independent. This worked for a while, and Facebook gained a lot of popularity. Eventually, an ex-Googler and his friend became fed up with the locked-down communication silo of Facebook and started a wide open website called Twitter. Twitter grew even faster than Facebook and challenged it as the media darling. Twitter was smart and made its content readily available to both developers (through APIs) and search engines (through indexable content). Facebook responded with Facebook Connect (which enables people to log in to Facebook through other websites) and opened its chat protocol so its users could communicate outside of the Facebook domain. It also made a limited amount of information about users visible to search engines.

Facebook is now accepting its place in the Internet community and is benefiting from its decision to embrace other websites. What it misjudged early on was that websites are best when they are interconnected. Being able to see this connection is one of the skills that separates SEO professionals from SEO fakes.

I highly recommend writing down everything you notice in a section of a notebook identified with the domain name and date of viewing.
In this chapter you learn the steps that the SEO professionals at SEOmoz go through either before meeting with a client or at the first meeting (depending on the contract). When you view a given site in the way you are about to learn in this chapter, you need to take detailed notes. You are likely going to notice a lot about the website that could use improvement, and you need to capture this information before the details distract you.

Keep Your Notes Simple

The purpose of the notebook is simplicity and the ability to go back frequently and review your notes. If actual physical writing isn’t your thing, consider a low-tech text editor on your computer, such as Windows Notepad or the Mac’s TextEdit. Bare-bones solutions like a notebook or text editor help you avoid the distraction of the presentation itself and focus on the important issues—the characteristics of the website that you’re evaluating.
If you think it will be helpful and you have Internet access readily available, I recommend bringing up a website you are familiar with while reading through this chapter. If you choose to do this, be sure to take a lot of notes in your notebook so you can review them later.

The 1,000-Foot View—Understanding the Neighborhood

Before I do any work on a website I try to get an idea of where it fits into the grand scheme of things on the World Wide Web. The easiest way to do this is to run searches for some of the competitive terms in the website’s niche. If you imagine the Internet as one giant city, you can picture domains as buildings. The first step I take before working on a client’s website is figuring out in which neighborhood its building (domain) resides. This search result page is similar to seeing a map of the given Internet neighborhood. You usually can quickly identify the neighborhood anchors (due to their link popularity) and specialists in the top 10 (due to their relevancy).

During client meetings, when I look at the search engine result page for a competitive term like advertising, I am not looking for websites to visit but rather trying to get a general idea of the maturity of the Internet neighborhood. I am very vocal when I am doing this and have been known to question out loud, “How did that website get there?”

A couple times, the client momentarily thought I was talking about his website and had a quick moment of panic. In reality, I am commenting on a spam site I see rising up the results.

Also, take note that regardless of whether or not you are logged into a Google account, the search engine will automatically customize your search results based on links you click most. This can be misleading because it will make your favorite websites rank higher for you than they do for the rest of the population.

Taking Advantage of Temporal Algorithms

You can use the temporal algorithms to your advantage. I accidentally did this once with great success. Just before the 2017 Oscars, I wrote about why I didn’t enjoy watching “The Arrival”. As a result of the temporal algorithms, my post ranked in the top 10 for the query “The Arrival” for a short period following the movie’s release and during Oscar voting. Because of this high ranking, tens of thousands of people read my article. I thought it was because I was so awesome, but after digging into my analytics I realized it was the unplanned use of the temporal algorithms. If you are a blogger, this tactic of quickly writing about news events can be a great traffic booster.

Action Checklist

When viewing a website from the 1,000-foot level, be sure to complete the following:

- Search for the broadest keyword that the given site might potentially rank for
- Identify the maturity of the search engine results page (SERP) based on the criteria listed in this chapter
- Identify major competitors and record them in a list for later competitive analysis

This section discussed analyzing websites at their highest level.

At this point, the details don’t matter; it is the macro patterns that are important. The following sections dive deeper into the website to figure out how everything is related. Remember, search engines use hundreds of metrics to rank websites. This is possible because the same website can be viewed in many different ways.

The Secrets of Popularity

Once upon a time there were two nerds at Stanford working on their PhDs.

(Now that I think about it, there were probably a lot more than two nerds at Stanford.) Two of the nerds at Stanford were not satisfied with the current options for searching online, so they attempted to develop a better way.

Being long-time academics, they eventually decided to take the way academic papers were organized and apply that to webpages. A quick and fairly objective way to judge the quality of an academic paper is to see how many times other academic papers have cited it. This concept was easy to replicate online because the original purpose of the Internet was to share academic resources between universities.

The citations manifested themselves as hyperlinks once they went online. One of the nerds came up with an algorithm for calculating these values on a global scale, and they both lived happily ever after. Of course, these two nerds were Larry Page and Sergey Brin, the founders of Google, and the algorithm that Larry invented that day was what eventually became PageRank. Long story short, Google ended up becoming a big deal and now the two founders rent an airstrip from NASA so they have somewhere to land their private jets. (Think I am kidding? See
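The citation-counting idea can be sketched in a few lines of code: a page's score is repeatedly redistributed across the pages it links to, so pages that attract many links accumulate score. This is a simplified power-iteration over a toy link graph, not Google's production algorithm; the page names are made up for illustration:

```python
# Simplified PageRank: each iteration, every page splits its current score
# evenly among its outbound links; a damping factor (0.85 is the commonly
# cited value) models the chance a surfer follows a link vs. jumps anywhere.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outbound in links.items():
            for target in outbound:
                new_rank[target] += damping * rank[page] / len(outbound)
        rank = new_rank
    return rank

# A toy web: both 'blog' and 'forum' cite 'hub', so 'hub' earns the top score.
toy_web = {
    "hub": ["blog", "forum"],
    "blog": ["hub"],
    "forum": ["hub"],
}
ranks = pagerank(toy_web)
```

The real algorithm handles complications this sketch ignores (pages with no outbound links, link spam, and hundreds of other signals layered on top), but the core "links as citations" intuition is the same.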

Relevance, Speed, and Scalability


Where does Netflix store the offline downloads?

Netflix announced a few months back that subscribers will be able to download select movies and TV shows for offline playback. The feature had been requested by users for a long time, and it’s reportedly been in the works since June. Now, anyone with a Netflix subscription can download movies and TV shows to watch when they’re not connected to the internet.

How to start downloading movies

You can only download Netflix videos using the iOS or Android app. Netflix requires iOS 8.0 or later or Android 4.4.2 or later, in addition to the latest version of the app. Downloading a video consumes about as much data as streaming it, so if you plan on saving a bunch of videos, we’d recommend connecting to a reliable WiFi connection to prevent any unexpected mobile data charges.

Where are they stored?


Where c: is your system drive and Username gets replaced with the user you are logged in with.

Once you navigate to the above directory, you will see all downloaded movies and TV shows. Netflix doesn’t use descriptive names for downloaded content, so you cannot identify titles by filename. However, the size of a file might give you some clue. The biggest catch is that these files cannot be opened with media players like VLC or GOM Player.
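Since file size is the only real clue to which download is which, a short script can line the files up largest-first. This is just a convenience sketch; the directory argument is a placeholder for wherever the downloads live on your machine:

```python
import os

def files_by_size(directory):
    """Return (filename, size_in_bytes) pairs, largest first."""
    entries = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            entries.append((name, os.path.getsize(path)))
    return sorted(entries, key=lambda e: e[1], reverse=True)

# Point this at the Netflix downloads folder on your system (placeholder
# path here); a feature-length movie will dwarf a single TV episode.
for name, size in files_by_size("."):
    print(f"{size / 1024 / 1024:8.1f} MB  {name}")
```

A two-hour movie at mobile quality typically lands in the high hundreds of megabytes, so the largest files are usually films and the smaller clusters of similar sizes are episodes of the same show.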


According to a Netflix spokesperson,

“The downloads can only be viewed within the Netflix mobile app; they aren’t like videos you download from the internet and store to your device.” It’s safe to say this is a digital rights management (DRM) scheme to protect the copyrights of videos being offered.

Please bear in mind that the Netflix app will not recognize or play content if you rename or change the files. So, don’t try to rename downloaded Netflix content.

The whole reason offline storage and playback took this long to implement is that Netflix had to spend years working out every possible way the functionality could, and more than likely would, be exploited to steal the media content and redistribute it, a.k.a. pirate it. The system now rolling out is pretty damned bulletproof, judging from every research report I’ve read about it so far. Netflix spent almost 8 months in a beta program asking people to hack the hell out of it and rip off that content, and as far as I’m aware nobody was ever successful, even though I’m pretty certain some very talented coders, developers, and “hackers” went to work on that system with nothing to show for their efforts.

Sure, it’s entirely possible someone might find a particular exploit that makes it a snap, or even a click or two, to decrypt and break the DRM on the local content once it’s downloaded. We already know Netflix streams can be captured, so even with all the time and expense put into this new functionality, it could get cracked pretty fast, or never; that remains to be seen.