2012 JUN Issue

Meeting the Needs of Advanced Photovoltaic Manufacturing

The world of photovoltaics (PV) manufacturing is a diverse one, with new tools, processes, and materials being introduced on a constant basis. Many have noted that grid parity is the major goal for the PV industry. Grid parity is a poorly defined term, but inherent in it is an acknowledgement that alternative approaches exist at lower costs. The implication is clear: improving the cost-effectiveness of PV is essential.

By Alan Levine



The total cost of a photovoltaic (PV) system involves not just the manufacturing of wafers, cells, and modules, but also the installation in the field. These Balance of System (BoS) costs are a substantial portion of the cost of the finished power plant. Many BoS costs scale with the area of the panels to be deployed, which gives higher-efficiency panels a distinct advantage in the potential to lower overall costs.

A simple example is to assume the BoS costs are half the project cost and the panels are the other half. A 10% increase in panel cost could buy a 10% increase in the power provided, while raising the overall project cost by only 5%. While this is overly simplistic (items like inverter costs scale with the power provided), the example highlights a key aspect of solar operations: higher-efficiency cells provide considerable leverage for improving margins. There is sufficient value in these improved margins to accommodate added costs in the supply chain, notably in the production of cells and modules.
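The leverage arithmetic can be sketched in a few lines. This is a minimal Python illustration; the 50/50 cost split and the 10% figures are the simplifying assumptions of the example above, not real project data.

```python
# Sketch of the leverage arithmetic: BoS and panels each half of project
# cost (a simplifying assumption). Panels that cost 10% more but deliver
# 10% more power raise total cost only 5% and cut cost per watt.
bos_share, panel_share = 0.5, 0.5        # normalized baseline project cost
baseline_cost = bos_share + panel_share  # = 1.0
baseline_power = 1.0                     # normalized output

upgraded_cost = bos_share + panel_share * 1.10   # panels 10% pricier
upgraded_power = baseline_power * 1.10           # 10% more power

cost_increase = upgraded_cost / baseline_cost - 1
cpw_change = (upgraded_cost / upgraded_power) / (baseline_cost / baseline_power) - 1
print(f"project cost increase: {cost_increase:.1%}")   # 5.0%
print(f"cost-per-watt change:  {cpw_change:+.1%}")     # about -4.5%
```

The same few lines make it easy to test other splits: the larger the BoS share, the more leverage efficiency gains provide.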

Added process complexity and the use of higher-end materials have the potential to pay off across the entire value chain, provided the cell performance improves sufficiently. Done right, the net result is improved profits while concurrently lowering cost per watt. This process complexity has costs associated with it. Often, more equipment is required; sometimes the equipment is more costly. New process steps may be needed, which can mean new materials or different approaches using the same materials.

What is clear is that solar wafer, cell, and module fabrication lines are constantly evolving and that their complexity is increasing. This leads to a challenge: how to operate these complex and constantly evolving factories in the most efficient manner possible. Fortunately, these factories can leverage tools that aid in operational decisions. These tools, developed and proven in related industries (integrated circuits, flat panel displays, magnetic heads, etc.), can tie innovation and operations to value. The tools that connect technology to value are known as operational models.

Operational modeling is based on one simple principle: every decision, even a decision that appears technical, is a business decision. Operational models are decision tools.

In practice, operational models are software-based tools. They are typically targeted to specific types of analyses, such as a factory, a business, or a process. In the simplest form, a user will look at a set of approaches and determine the impact of these approaches against appropriate business metrics. Since these models contain both the output value and the input resources, it becomes straightforward to get business metrics such as cost reduction, cash flow, and cycle time or payback period.
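As a minimal illustration of that idea, the sketch below scores two hypothetical approaches against a single business metric, simple payback period. The names and dollar figures are invented for illustration; a real operational model would weigh several metrics at once.

```python
# Minimal sketch: compare candidate approaches against one business
# metric, simple payback period. Names and dollar figures are invented.
def payback_years(upfront_cost, annual_benefit):
    """Years to recover an upfront investment from its annual benefit."""
    return upfront_cost / annual_benefit

approaches = {
    "new deposition tool": payback_years(2_000_000, 800_000),   # 2.5 yr
    "extra metrology step": payback_years(300_000, 250_000),    # 1.2 yr
}
for name, years in sorted(approaches.items(), key=lambda kv: kv[1]):
    print(f"{name}: {years:.1f}-year payback")
```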

The tools that were developed to analyze operations in our sister industries continue to be enabling technologies for those organizations. Among these critical operational models are cost and resource modeling, discrete-event simulation, and Cost Of Ownership (COO). Given the immense paybacks of these tools, it is essential that the solar industry bring these into use. Fortunately, many of these tools evolved from core manufacturing concepts. This has enabled them to be easily adapted to developing industries, including solar, MEMS, nano-devices, energy storage, etc.

Most of the key underlying standards developed by Semiconductor Equipment and Materials International (SEMI) are easily incorporated into the solar world. The methods that allow information to be analyzed are common to semiconductors, display, magnetic heads, crystal growing, solar cells, solar modules, and thin-film panels. These are solved problems.

In a puzzling development, the solar industry has been slow to incorporate these tools. Puzzling, because most of our sister industries do not have the sort of highly differentiated competition that solar energy has. The next great computer chip will most likely be an extension of an existing computer chip. The next great display will be an extension of current display technology. But the next breakthrough in generating electrical power can come from wind, water, coal, nuclear, etc. Solar producers have enormous incentive to optimize their operations.

Operational modeling enables companies to look at scenarios as large as multiple factories running multiple products in multiple locations around the world and as focused as specific changes to single process steps.

Let’s start with a few cautionary notes about common methods employed currently. Intuition does not work well when more than a few variables are in play. Most analyses are laden with subtleties. Single product factories quickly become multi-product factories with embedded development lines. Companies are constantly ramping products and their production rates up and down. Quick and dirty spreadsheets used for simplified ‘greenfield’ factories have proven woefully inadequate in real-world situations.

Before going into examples, it is worth taking a few moments to acknowledge some practical realities.

Very few factories start up and continue unchanged over extended periods. The real world is full of changes, some small and subtle, others large and dramatic. The nature of the real world requires operational models to go well beyond spreadsheets. Most operations are initially laid out as single-product factories running a single, stable process. In practice, these operations quickly evolve into multi-product and multi-process factories, often with development incorporated into the line. The business metrics for decisions can vary: one company might see cash flow as the key, while another might use Internal Rate of Return (IRR). But this much is clear: simple ‘greenfield’ situations are the exception, not the norm.

It is also important to understand the role of suppliers. Some offer a cost analysis to prospective clients, but typically show only what helps them the most. Manufacturers often struggle as well. One client told us that every person that did cost analysis in their operation had their own unique way of doing it.

Still, cost is only a part of the drive to lower cost per watt. Value is an integral part of the equation. Even with the rising complexity in solar operations, it remains rare for PV suppliers and manufacturers to have the ability to analyze cost and value concurrently.

Operational changes can range from simple to complex. A new material might change little other than the cost of the material at that step. On the other hand, it could impact the tool, other materials, other process steps, the capacity of the factory, the reliability of the unit, and the value of the finished product.

In some cases, it is necessary to model a wide range of items. It is often practical to simplify a model down to the areas where the change actually has an impact. Global optimization is ideal, but often overly complex when evaluating individual decisions.

When a potential change is identified, the next step is to determine the areas that are affected by it. Many poor choices are made because assumptions are too restrictive. People tend to limit an analysis to areas where information is easy to collect and avoid areas where information is difficult or time-consuming to gather, or complex to process.

Here is where management must take charge. It is straightforward to resource operational models. If management allocates adequate time and resources, and ultimately uses these results as part of their decision processes, operational models will work extremely well and provide very large paybacks.

Perhaps the biggest challenge is one that is not obvious. Input information cuts across functional lines within the organization, meaning management must encourage ease of access. For example, technical staff would need certain financial data, a process person might need maintenance information, and a factory planner might need productivity rates of the current tool set. With some simple approaches, management can make it easy for users to get all the necessary information.

The nature of the analysis will determine the type of operational model that is required. I’ll cover three situations, describing the question, the approach, and the use of the operational model in the decision process.

Perhaps the broadest questions involve factory physics, a term used to describe how the entire operation behaves. Complex processes, queues, and reliability can have substantial impacts on the productivity of a factory. What makes it even more challenging is that processes, queues, and reliability interact with each other. The major challenge is the optimization of expensive resources. A well-designed operation will usually have the most expensive capital toolset be the bottleneck. Thus, the key is to ensure the bottleneck is fed with production units 100% of the time it is available. Any idle time lost at a bottleneck translates to lost capacity for the entire factory. This is what differentiates a bottleneck process or tool from other processes.

Bottleneck analysis is inherently complex. Changes to processes and product mix have a way of changing the bottleneck location within a factory. The value of understanding these changes ahead of time can significantly improve the financial performance of a business.
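A static back-of-the-envelope version of bottleneck identification might look like the following sketch. The tool names, rates, and availabilities are hypothetical; a static view like this only locates the constraint, and understanding how it moves with product mix and variability requires the dynamic techniques discussed in this article.

```python
# Static sketch of bottleneck identification: the effective capacity of
# each step is its raw rate times availability, and the factory can run
# no faster than its slowest step. Names and numbers are hypothetical.
steps = {
    "texturing":     {"rate_per_hr": 1400, "availability": 0.95},
    "diffusion":     {"rate_per_hr": 1250, "availability": 0.90},
    "metallization": {"rate_per_hr": 1600, "availability": 0.88},
}
effective = {name: s["rate_per_hr"] * s["availability"] for name, s in steps.items()}
bottleneck = min(effective, key=effective.get)
print(f"bottleneck: {bottleneck} at {effective[bottleneck]:.0f} units/hr")
# Every idle hour at the bottleneck costs the whole factory that output.
```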

Accurately addressing bottleneck management requires a dynamic technique called discrete-event simulation. This approach allows companies to see the interactive effects of a wide array of factors. Often, the challenge in managing a bottleneck resource comes down to understanding the variability in the availability of prior steps.

Even when resource availability is highly predictable, it can be a challenge to optimize the factory. For example, Preventative Maintenance (PM) is highly predictable. Yet, optimizing PMs, especially between linked systems, can be difficult. One step may require a cleaning every 10,000 units, another may require a recalibration every 24 hours, and a third may need a conditioning process with every 300μm of deposition.
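To make the mixed PM triggers concrete, the sketch below converts each trigger onto a common time basis. The throughput and per-unit deposition figures are assumptions for illustration, not from the article.

```python
# Toy PM alignment: convert each trigger (units, hours, deposition) onto
# a common clock. Throughput and deposition per unit are assumed figures.
units_per_hr = 500
microns_per_unit = 0.05   # deposition per unit at this step
pm_intervals_hr = {
    "clean every 10,000 units":         10_000 / units_per_hr,
    "recalibrate every 24 hours":       24.0,
    "condition every 300 um deposited": 300 / (microns_per_unit * units_per_hr),
}
for trigger, hrs in pm_intervals_hr.items():
    print(f"{trigger}: every {hrs:.0f} h of production")
```

Once all triggers are on a common time basis, it becomes possible to look for cadences where stoppages on linked systems can be batched rather than taken one at a time.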

Far more complex is the variability in reliability. Does a specific process stop many times, but only for a few moments? Or does it go down infrequently, but require many hours, even multiple shifts, to make the needed repairs? Both of these scenarios could have the same average reliability, but would have very different impacts on the operation, not just on the process step.
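A toy buffer model (far simpler than a full discrete-event simulation) can demonstrate the point: the two failure profiles below have the same 90% average availability, yet starve a downstream bottleneck very differently. The rates, buffer size, and failure figures are all invented for illustration.

```python
import random

# Toy model: an upstream tool feeds the bottleneck through a finite
# buffer. Both failure profiles average 90% availability, but rare long
# outages drain the buffer and starve the bottleneck far more than
# frequent short stops do. All numbers are invented for illustration.
def starved_fraction(mtbf_hr, mttr_hr, buffer_cap=50.0, hours=100_000, seed=1):
    rng = random.Random(seed)
    up, timer = True, rng.expovariate(1.0 / mtbf_hr)
    buffer_level, starved = buffer_cap, 0
    for _ in range(hours):
        if timer <= 0:                     # state change: flip up/down
            up = not up
            timer = rng.expovariate(1.0 / (mtbf_hr if up else mttr_hr))
        timer -= 1
        if up:                             # upstream makes 1.2 units/hr
            buffer_level = min(buffer_cap, buffer_level + 1.2)
        if buffer_level >= 1.0:            # bottleneck consumes 1 unit/hr
            buffer_level -= 1.0
        else:
            starved += 1                   # idle bottleneck: lost factory output
    return starved / hours

short_stops = starved_fraction(mtbf_hr=9, mttr_hr=1)      # many brief stops
long_stops = starved_fraction(mtbf_hr=900, mttr_hr=100)   # rare multi-shift outages
print(f"frequent short stops: {short_stops:.1%} of bottleneck time starved")
print(f"rare long outages:    {long_stops:.1%} of bottleneck time starved")
```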

Another layer of complexity is added when looking at the availability of the resources needed to fix down processes. This includes people, parts, and qualification test units. People have a big impact on downtime characteristics. Cross training of operator and maintenance staff can be a powerful way to improve factory productivity. The number of items that impact productivity is enormous. Discrete-event simulation is the best way to look at a large variety of effects in a very short time frame. Things like intelligent cross training can achieve capacity increases of 10%-20%, even in seemingly well-run operations (see Figure 1).



Factory Explorer®1) and its discrete-event simulation engine create a detailed understanding of the constraints on the factory. In Figure 1, five resource sets are analyzed for potential bottleneck situations: three tool sets (ION, AME, and PE) and two operator sets (ION_Ops and PE_Ops). The colors represent the percent of time a resource is in a given state (i.e., PM, unscheduled downtime, set-up time, processing, etc.). Analyzing bottlenecks is necessary to determine the highest payback when allocating resources.

A different type of question revolves around the financial performance of an operation. This is best handled by a deterministic model known as cost & resource analysis, which operates using an accounting principle known as activity-based costing. This method can answer many potential questions, including:

-Will the enterprise make a profit?

-How many people are needed?

-What is the cash flow?

-When to ramp one product down and another up?

-Where to build a new product?

-Should the organization build or outsource?

Operational modeling provides objectivity, which is essential to a good business decision.
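As a sketch of how activity-based costing assigns cost, the hypothetical example below builds each product's cost from the activities it actually consumes, rather than spreading overhead evenly. All rates and routings are invented.

```python
# Hypothetical activity-based costing sketch: each product's cost is the
# sum of the activities it actually consumes, so a complex product is
# not subsidized by a simple one. Rates and routings are invented.
activity_rates = {        # fully loaded $ cost per activity occurrence
    "diffusion": 0.08,
    "printing":  0.05,
    "test":      0.03,
}
routings = {              # activity occurrences per finished cell
    "standard cell": {"diffusion": 1, "printing": 1, "test": 1},
    "high-eff cell": {"diffusion": 2, "printing": 1, "test": 2},
}
costs = {}
for product, usage in routings.items():
    costs[product] = sum(activity_rates[act] * n for act, n in usage.items())
    print(f"{product}: ${costs[product]:.2f} per cell")
```

Because each product carries only the activities in its own routing, adding or ramping a product changes its cost picture without distorting the others.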

Projecting total product costs is critical to driving high ROIs. In Figure 2, output from Factory Commander®1) shows the evolution from a single product factory in the first year to a multi-product factory in subsequent years, with products ramping up and down. The change in total costs reflects the changing costs and volume for each individual product.

A third form of analysis represents a focused use of operational modeling, where the issues are essentially self-contained within a specific process step. This can be handled with a technique known as Cost Of Ownership (COO); self-contained sequences of steps can be examined with a related form of COO. The COO example in Figure 3 is for a single step. It shows how a sophisticated approach, incorporating continuous improvement, makes a major difference in determining the costs of the operation.
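A stripped-down single-step COO calculation, loosely in the spirit of the SEMI E35 formulation, divides lifetime cost of owning and running the tool by its lifetime good output. All inputs below are hypothetical.

```python
# Simplified single-step Cost of Ownership (hypothetical inputs):
# lifetime cost divided by lifetime good output.
def coo_per_good_unit(capital, annual_opex, years,
                      throughput_per_hr, utilization, yield_frac,
                      hours_per_year=8760):
    lifetime_cost = capital + annual_opex * years
    lifetime_good_units = (throughput_per_hr * hours_per_year *
                           utilization * yield_frac * years)
    return lifetime_cost / lifetime_good_units

coo = coo_per_good_unit(capital=1_500_000, annual_opex=400_000, years=5,
                        throughput_per_hr=1200, utilization=0.80,
                        yield_frac=0.98)
print(f"COO: ${coo:.4f} per good unit")
```

The denominator is where COO earns its keep: a step with low capital cost but poor utilization or yield can easily cost more per good unit than an expensive, well-utilized one.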



Using the continuous improvement engine in TWO COOL®1) allows COO to be examined in a realistic manner. In this example, inflation drives labor and material costs up, while improvements in raw throughput, reliability and test times drive costs down. Continuous improvement provides a COO benefit of about 5% in year 2 and increases to about 14% beginning in year 4.
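The compounding effect of continuous improvement can be illustrated with a toy calculation. The 3% inflation and 8% improvement rates below are assumptions for illustration, not the data behind the figure.

```python
# Toy compounding example: input costs inflate each year while continuous
# improvement raises good output, so effective per-unit cost falls versus
# a static model. The 3% and 8% rates are assumptions.
cost0, output0 = 1_000_000, 10_000_000   # year-0 annual cost ($) and good units
inflation, improvement = 0.03, 0.08
benefits = []
for year in range(1, 6):
    static_coo = cost0 * (1 + inflation) ** year / output0
    improved_coo = (cost0 * (1 + inflation) ** year /
                    (output0 * (1 + improvement) ** year))
    benefits.append(1 - improved_coo / static_coo)
    print(f"year {year}: COO benefit from improvement {benefits[-1]:.1%}")
```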

When people speak of grid parity, they are inherently speaking of time and money, which is why decisions should be linked to time and money. Without this approach, precious resources will be wasted. Reducing cost per watt efficiently will require companies in the PV industry to form a bridge between technology and business. Operational models are that bridge, a proven solution for driving factory improvements. These improvements are major contributors towards opening up vast new markets and realizing the promise of grid parity.


Alan Levine has spent 30 years working in high technology manufacturing, with an emphasis on manufacturing productivity. Levine has been with Wright Williams & Kelly, Inc. (www.wwk.com) since 1995 and focuses on helping clients increase the value they receive from their complex operations. Previously, he held positions with Fairchild Semiconductor, KLA Instruments, and Ultratech Stepper. He holds a degree in Chemical Engineering from Cornell.



1) TWO COOL®, Factory Commander®, and Factory Explorer® are commercial software packages from Wright Williams & Kelly, Inc.



For more information, please send your e-mails to pved@infothe.com.

2011 www.interpv.net All rights reserved. 


Publisher: Choi Jung-sik | Edited by: Lee Sang-yul