Business Processes in Practice: Make-to-Stock vs. Make-to-Order (Apple Case Study)

A good example of a company that uses the make-to-stock strategy is Apple Inc. Apple uses the make-to-stock process for Macs sold in its Apple stores. The company first estimates the consumer demand for its Mac computers. It then calculates its available manufacturing capacity and the quantities of raw materials it will need to build enough computers to meet consumer demand. Apple’s strategy is to purchase raw materials and reserve manufacturing capacity ahead of time to maximize the cost efficiencies of buying materials in bulk quantities and doing large production runs. Apple and its contract manufacturers then produce a specific quantity of each Mac model and ship them from the factory to the Apple stores and other retail outlets for sale. When customers come into an Apple store, they expect that the computer they want to buy will be there and that they can take it home immediately after purchasing it.

 

Because Apple uses a make-to-stock strategy, the company must pay extremely close attention to both its retail sales and the amount of finished goods inventory it has in stock in order to estimate its demand as accurately as possible. If Apple overestimates the demand for a particular product, the company will be stuck with a large inventory of very expensive finished goods that customers
don’t want to buy and that will decrease in value while they sit on the shelf. Conversely, if it underestimates the demand for a product, customers who want to purchase the computer will be told it is out of stock. They will then have two options: place a back order and wait until the store gets resupplied with inventory, or shop for the product at a different store. Either outcome will make consumers unhappy and could result in lost sales.

In contrast, one of Apple’s major competitors, Dell, employs a make-to-order production strategy. Dell was the first company in the industry to build computers only after it had received a firm order and thus knew exactly what product the customer wanted. Because Dell does not have many retail outlets like Apple (although it has recently tested some retail partnerships), the company relies primarily on telephone and Internet sales channels. In contrast to Apple customers, then, when Dell customers place an order, they anticipate that they will have to wait a few days for the computer to be produced and delivered.
After the customer places an order, Dell typically assembles the computer from raw materials it has on hand and then ships it directly to the customer.
Unlike Apple, then, Dell does not need to be very concerned with estimating demand for its finished products because it knows exactly what customers want based on their orders. However, Dell must be extremely careful in purchasing raw materials and managing its production capacity. Because its production runs are very small, sometimes one computer at a time, it must estimate its raw material needs and production scheduling based on a forecast of uncertain customer demand.
If Dell mismanages its production planning process, it is especially susceptible to an oversupply or undersupply of raw materials and shortages or idleness in production capacity. If Dell does not have sufficient raw materials or production capacity, customers will have to wait much longer for their computers to be shipped.
Conversely, if the company has excessive raw materials or unused production capacity, it loses money.
Although Dell’s customers are accustomed to waiting a few days for their computers to arrive, they probably will be upset if their deliveries are delayed for several weeks due to a shortage of raw materials or a backlog of production orders. Alternatively, Dell’s profitability will suffer if its production lines are idle or its warehouses are filled with unused raw materials.
Both Apple and Dell have chosen a production strategy that maximizes their profitability. Apple believes that by controlling the entire buying experience through its Internet and physical stores, it can attract more customers. This strategic objective drives Apple to place a much higher emphasis on having products available in the store when a customer comes in to shop, which increases the likelihood that he or she will make a purchase. In addition, Apple realizes significant cost savings through large, planned production runs and close coordination with retail sales data generated by its online and physical stores. For all these reasons, the make-to-stock production process is probably the best strategy for both Apple and its customers.
In the case of Dell, the make-to-order production process fits well with the company’s rapid assembly and standardized products. Dell’s customers are comfortable ordering a computer that they have never seen because they know that Dell uses high-quality, industry-standard components. They also trust Dell to ship them a finished computer in just a few days, and they are willing to wait
for it to arrive rather than pick it up in a store.

In essence, the preferences and behaviors of each company’s customers determine, to a great extent, the production process for each company. Apple’s customers want to touch and experience the product in a retail store, whereas Dell’s customers are content to
buy something over the phone or the Internet. Each company has optimized its production process to match both its specific set of customer requirements and its internal profitability goals and cost structure.

Source: Adapted from Magal and Word, Essentials of Business Processes and Information Systems, John Wiley & Sons, Inc. (2009).

Experts Warned of Cloud Complexity

A Yale researcher has warned that cloud-based systems might melt down as they become more and more complex.

Bryan Ford has written a paper that he will soon present at the USENIX HotCloud 2012 conference. The paper argues that as the use of cloud computing becomes more mainstream, major operational “meltdowns” may arise: everything is becoming quite complex, and that complexity can cause accidents.

Ford explained that as diverse cloud services share increasingly fluid and aggressively multiplexed hardware resource pools, the probability rises that unexpected things will happen, including unpredictable interactions between load balancing and other reactive mechanisms. This may result in dynamic instabilities, also known as “meltdowns.”

According to the report, the situation is a little like the intertwining, complex relationships and structures that helped promote the global financial crisis. Ford pointed out that new cloud services may emerge that resell, trade, or speculate on complex “derivatives,” much as the financial trading industry does.

Such components will be maintained and deployed by different companies, which, due to competition, will avoid sharing details about the internal operation of their services wherever possible. As a result, the cloud industry might face speculative bubbles. The experts predict occasional large-scale failures due to composite cloud services with weaknesses that are not revealed until those bubbles burst.

Meanwhile, there is no solution to the problem. The only advice the experts can offer is that providers should release detailed data about their system dependencies to a trusted third party that offers cloud reliability analysis services.

Securing Web Services With Username and Password (Custom SoapExtension)

Have you ever wanted to intercept a web service method, perhaps to log it or, as we are about to do, to authenticate a user? Did you know you can intercept any incoming SoapMessage that is sent to your web service? This is all possible because of the SoapExtension class in .NET. Not only can you intercept the incoming message, but you can do it at any of four stages:

  1. AfterDeserialize
  2. AfterSerialize
  3. BeforeDeserialize
  4. BeforeSerialize

This obviously gives us a lot of flexibility. In our example we are going to work with the AfterDeserialize stage, which occurs after the message has been sent across the wire and deserialized into a SoapMessage object. Since we will have a full-blown SoapMessage object, we can inspect the headers of the SoapMessage and take care of authentication there. Our end goal with this approach is to authenticate a user on a WebMethod simply by adding an authentication attribute to the WebMethod, like this:

[WebMethod]
[SoapHeader("CustomSoapHeader")]
[AuthenticatonSoapExtensionAttribute] // magic here

public int AddTwoNumbers(int x, int y)
{
    return x + y;
}

You’ll notice that in this example we do not have to remember to call the authentication code manually, as we did in the previous article. Instead, by adding the attribute to the method, the runtime knows to call the authentication method. To do this we first need to create a custom class that extends SoapExtensionAttribute and then another that extends SoapExtension.
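For reference, the "CustomSoapHeader" named in the SoapHeader attribute is the field on the service class that holds the custom SOAP header from the previous article. As a minimal sketch of that header type (the class name ServiceAuthHeader matches what the Authenticate method casts to later; the Username and Password fields are an assumption based on the previous article, not code shown here), it might look like this:

```csharp
using System.Web.Services.Protocols;

// Hypothetical sketch of the custom SOAP header the WebMethod declares.
// ServiceAuthHeader is the type the Authenticate method casts to;
// the Username/Password fields are an assumption from the previous article.
public class ServiceAuthHeader : SoapHeader
{
    public string Username;
    public string Password;
}
```

The client populates these fields before each call, and the SoapHeader attribute binds the incoming header to a public field of this type on the service class.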

Create a Custom SoapExtensionAttribute

In order to have the method call our custom SoapExtension, we need to create a class that extends SoapExtensionAttribute. It is a fairly simple class with two overridden properties. Here’s the code:

[AttributeUsage(AttributeTargets.Method)]
public class AuthenticatonSoapExtensionAttribute : SoapExtensionAttribute
{
    private int _priority;

    public override int Priority
    {
        get { return _priority; }
        set { _priority = value; }
    }

    public override Type ExtensionType
    {
        get { return typeof(AuthenticatonSoapExtension); }
    }
}

You’ll notice that the only member of real substance is the ExtensionType property, which simply returns the type of our custom extension.

Create a Custom SoapExtension

The last piece that pulls this all together is a custom class that extends SoapExtension. In this class we are going to write the code that does the actual authentication. We check for the AfterDeserialize stage and then first make sure we have a valid SoapHeader. Once we do, we call the static validation method and pass in the SoapHeader.

/// <summary>
/// Custom SoapExtension that authenticates the method being called.
/// </summary>
public class AuthenticatonSoapExtension : SoapExtension
{
    /// <summary>
    /// Allows a SOAP extension to initialize data specific to an XML Web service
    /// method using an attribute applied to the method, at a one-time performance cost.
    /// </summary>
    /// <param name="methodInfo"></param>
    /// <param name="attrib"></param>
    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attrib)
    {
        return null;
    }

    /// <summary>
    /// Allows a SOAP extension to initialize data specific to a class implementing
    /// an XML Web service, at a one-time performance cost.
    /// </summary>
    /// <param name="WebServiceType"></param>
    public override object GetInitializer(Type WebServiceType)
    {
        return null;
    }

    /// <summary>
    /// Allows a SOAP extension to initialize itself using the data cached
    /// in the GetInitializer method.
    /// </summary>
    /// <param name="initializer"></param>
    public override void Initialize(object initializer)
    {
    }

    /// <summary>
    /// After the message is deserialized, we authenticate it.
    /// </summary>
    /// <param name="message"></param>
    public override void ProcessMessage(SoapMessage message)
    {
        if (message.Stage == SoapMessageStage.AfterDeserialize)
        {
            Authenticate(message);
        }
    }

    public void Authenticate(SoapMessage message)
    {
        ServiceAuthHeader header = message.Headers[0] as ServiceAuthHeader;
        if (header != null)
        {
            ServiceAuthHeaderValidation.Validate(header);
        }
        else
        {
            throw new ArgumentNullException("message", "No ServiceAuthHeader was specified in the SoapMessage.");
        }
    }
}

The method we are really concerned with is ProcessMessage, which checks for the stage and then calls the Authenticate method. This in turn calls our static validation method, which checks for authentication. At this point light bulbs should be going off! Since we have a SoapMessage object, do we not know which method is being called? Yes! Could we modify the ServiceAuthHeaderValidation to check a database instead of hard coding things? Yes! Now you are starting to see where this could really go. SoapExtensions are powerful and limited only by your imagination.
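As a sketch of that database idea, here is one way ServiceAuthHeaderValidation.Validate might check credentials against a table instead of hard-coded values. This is a hypothetical sketch, not the code from the previous article: the connection string placeholder, the Users table, and the column names are all assumptions.

```csharp
using System.Data.SqlClient;
using System.Web.Services.Protocols;

// Hypothetical sketch: database-backed validation instead of hard-coded
// credentials. The table and column names are assumptions.
public static class ServiceAuthHeaderValidation
{
    public static void Validate(ServiceAuthHeader header)
    {
        using (SqlConnection conn = new SqlConnection("...your connection string..."))
        {
            conn.Open();

            // A parameterized query avoids SQL injection from the header values.
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Users WHERE Username = @u AND Password = @p", conn);
            cmd.Parameters.AddWithValue("@u", header.Username);
            cmd.Parameters.AddWithValue("@p", header.Password);

            int matches = (int)cmd.ExecuteScalar();
            if (matches == 0)
            {
                // Reject the call with a SOAP fault the client can interpret.
                throw new SoapException("Authentication failed.",
                    SoapException.ClientFaultCode);
            }
        }
    }
}
```

In production you would store and compare password hashes rather than plain text, but the shape of the check is the same.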

When I Test It, It Doesn’t Work, Why?

Once you have your SoapExtension set up in your solution and press F5 to debug it within Visual Studio, it will launch a new web server on a random port and bring you to your service. You enter the parameters, submit the form, and it bypasses your validation. Why?!

This is supposed to happen, and here is why. If you invoke the service through the browser interface that Visual Studio launches, it will not trigger the authentication, and it isn’t supposed to. The reason is that you are not invoking the service via SOAP but rather through a standard POST from a form. Therefore, the SOAP extension is never going to fire. This form interface should be disabled when you publish your web service to production, so that only SOAP messages are allowed. If you have a case where you need to allow GET and POST calls, then the custom SoapExtension approach isn’t going to work.

As a benefit, Visual Studio builds the form for you automatically when you press F5 and allows you to pass parameters to the web method, but it does so via POST. If you invoke the web method from a console application or a real client making a SOAP call, you have to pass in the username and password. I actually consider this behavior a feature. If we didn’t use the SoapExtension to secure the method, we’d be forced to pass in the username and password all the time, which would mean we’d always have to call the secured web method from a test client. Speaking from experience, that isn’t fun. Of course you should have unit tests for each web method anyway, but it is really easy to pass in the parameters to a web form while debugging.

I hope you find this useful and no longer feel so daunted because your team leader asked you how you were going to authenticate web service methods against a database. The only thing left is for you to implement your required features. Of course, if you are on an intranet, instead of using a username and password as we did in the previous post, you could, inside Authenticate(SoapMessage message), use the user’s integrated credentials and check for membership in various Active Directory groups, or even use the Enterprise Library Security Block.