
2 types of concrete crushers | hxjq


The rapid pace of urbanization has produced a large amount of waste concrete, which not only occupies land but also pollutes the air and the wider environment. Recycling waste concrete has therefore become an important issue for governments to solve.

Abandoned concrete blocks are a high-quality source of concrete aggregate with many advantages. For example, after a building is demolished, the concrete blocks can be crushed and screened for use as recycled coarse and fine aggregate in new concrete, and the fine powder can be used directly as a raw material for cement. Concrete prepared from recycled cement and recycled aggregate can then enter the next cycle, achieving zero waste discharge across the whole cycle.

Concrete, cement and other construction wastes can be used as building aggregate and as raw material for recycled bricks after being properly crushed and screened. The main equipment used for crushing concrete falls into two types: the traditional fixed crusher and the mobile concrete crusher, among which small crushing equipment is especially favored by users.

Although the compressive strength and hardness of concrete are not high, its porosity and structure give it good toughness, which buffers the crushing force and lowers crushing efficiency. So what kind of crusher should be selected for crushing concrete? In the process of crushing waste concrete, following the working principle of more crushing and less grinding, the concrete crushing equipment needs to be configured carefully.

The jaw crusher, also known as a concrete crusher, is usually used as the primary equipment for concrete crushing. It is also suitable for the metallurgy, mining, construction, chemical, water conservancy and railway sectors, and is used for fine and medium crushing of ores and rocks with compressive strength below 250 MPa.

In recent years, the small jaw crusher has been favored by overseas users because of its small size, easy transportation and installation, low price, and quick return on investment. Models such as the PE-150×250, PE-200×350 and PE-400×600 have become the best choice for customers crushing concrete.

After the rough breaking, iron-removal equipment is added to separate out the steel bars and iron blocks in the waste concrete, eliminating the damage they would otherwise cause to the equipment without affecting production. Generally, an impact crusher, a fine-crushing jaw crusher or a cone crusher is used for the secondary crushing, reducing the material to less than 2 cm, which basically achieves the selected granularity.

For smaller discharge sizes, a third crushing stage can be added; for example, a fine crusher or a roller crusher can further crush the material to less than 10 mm. In actual production, a suitable crusher can be selected according to the size of the concrete blocks, in single- or multi-machine combinations, both of which offer simple operation, strong controllability and high production efficiency.

In the international environment of the crusher industry, besides the traditional jaw crusher, high-efficiency and environmentally friendly construction concrete crushers will be the trend of future development.

In view of the characteristics of concrete waste, Henan HXJQ Machinery designed a dedicated piece of concrete crushing equipment: the mobile concrete crusher. The crushed waste concrete can be used for reinforcing foundations, producing bricks, cement, etc., not only recovering its value but also relieving the pressure on land and the environment, a two-fold benefit.

The mobile concrete processing station produced by HXJQ Machinery adopts multi-stage combination mode, which includes jaw crusher, impact crusher, cone crusher and vibrating screening equipment, conveyor belt, etc. Generally, the concrete crushing station is composed of a concrete crusher (sand making machine), a screening machine, a feeder, a conveyor belt, a steel frame, a drive system, an electric control system, a motor unit and the like.

The concrete is fed into the crusher by the feeding equipment, and the crusher breaks the large blocks into gravel. Finished product that meets the standard is carried by conveyor belt to the stacking area, while material that does not meet the standard is carried by another conveyor belt back to the crusher until it is qualified.

With the vibrating screen, feeder, under-belt conveyor and crusher integrated into the vehicle, the unit can reach any position on the working site under any terrain conditions. The mobile concrete crusher therefore offers many advantages: reasonable material matching, smooth flow, reliable and convenient operation, high efficiency and energy saving.

1. According to the driving mode, it is divided into tire type and crawler type: the tire-type concrete crushing and sorting machine needs semi-trailer traction to move, while the crawler type can be operated remotely with buttons. Relatively speaking, the latter is more intelligent, and its price is higher.

2. According to the function, it is divided into crushing type and sand making type: the concrete crushing and screening machine includes a combination of crushing equipment such as the jaw crusher, cone crusher and impact crusher, while the sand making type is mainly equipped with a sand making machine and a hammer sand making machine.

The mobile crushing station can prevent and control environmental pollution, improve the ecological environment, and protect natural resources. Its size and model can be designed according to the different production needs of users. According to HXJQ Machinery's statistics, the small mobile crusher is chosen by more overseas users because of its reasonable price, high quality, and ease of relocation, operation and maintenance.

A project introduction of construction concrete treatment: in October 2018, a customer found HXJQ, and hoped that we could provide him with the complete equipment for breaking construction waste. Our technical manager quickly contacted him and learned that the customer had a large amount of construction waste to be disposed of.

From the perspective of economic means and practical operation, the technical manager recommended a fixed crushing station and designed a complete set of equipment suited to the customer's actual needs. In the end, the customer introduced the PE-400×600 jaw crusher and PF-1010 impact crusher produced by our company to break the concrete waste. The finished sand and gravel is used for brick making, roadbed material, etc., and the separated steel is recycled.

The pretreated concrete with reinforcing steel is sent to the jaw crusher for initial breakage by the conveyor belt, then effectively separated by the iron remover, and sent to the impact crusher for fine crushing. The crushed material is sieved by the vibrating screen. The finished material is output by the conveyor. If the material does not meet the specifications, it will continue to return to the impact crusher and break again.

The development and utilization of waste concrete as a recycled material solves the problem of treating large amounts of waste concrete and the resulting deterioration of the ecological environment. On the other hand, it reduces the construction industry's consumption of natural aggregate, thereby reducing the exploitation of natural sand and gravel and addressing, at the root, the depletion of natural aggregates and the environmental damage caused by the shortage of sandstone.

Under this circumstance, the crusher plays an irreplaceable role in the recycling of materials. Whether it is the traditional fixed crusher or the latest mobile crusher, both of them have their own advantages. As long as the size of the stone produced by the equipment can meet the standard, it is a good crusher.

leading crusher manufacturer in china | hxjq


HXJQ is a large-scale mining machinery company dedicated to producing and selling jaw crushers, cone crushers, mobile crushing stations and other mining machines, which feature high performance and reasonable prices.

HXJQ is the official brand of Henan Hongxing Mining Machinery Co., which has been well-known in the world. Devoting to the research and manufacture of material crushing and processing equipment, we supply customized solutions to mining, aggregate, and other industries to help customers to achieve their goals of getting profits and being successful.


For more than 40 years, Hongxing Mining Machinery has provided iron ore processing equipment to improve the working efficiency of mineral processing plants.

interpretable machine learning with python | packt


Do you want to understand your models and mitigate risks associated with poor predictions using machine learning (ML) interpretation? Interpretable Machine Learning with Python can help you work effectively with ML models.

The first section of the book is a beginner's guide to interpretability, covering its relevance in business and exploring its key aspects and challenges. You'll focus on how white-box models work, compare them to black-box and glass-box models, and examine their trade-offs. The second section will get you up to speed with a vast array of interpretation methods, also known as Explainable AI (XAI) methods, and how to apply them to different use cases, be it classification or regression, on tabular, time-series, image or text data. In addition to the step-by-step code, the book also helps you interpret model outcomes using examples. In the third section, you'll get hands-on with tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. The methods you'll explore here range from state-of-the-art feature selection and dataset debiasing methods to monotonic constraints and adversarial retraining.

For instance, there are rules as to who gets approved for credit or released on bail, and which social media posts might get censored. There are also procedures to determine which marketing tactics are most effective and which chest x-ray features might diagnose a positive case of pneumonia.

But not so long ago, rules and procedures such as these used to be hardcoded into software, textbooks, and paper forms, and humans were the ultimate decision-makers. Often, decisions came down entirely to human discretion, because rules and procedures were rigid and therefore not always applicable. There were always exceptions, so a human was needed to make them.

For example, if you applied for a mortgage, your approval depended on an acceptable and reasonably lengthy credit history. This data, in turn, would produce a credit score using a scoring algorithm. Then, the bank had rules that determined what score was good enough for the mortgage you wanted. Your loan officer could follow those rules or override them.

These days, financial institutions train models on thousands of mortgage outcomes, with dozens of variables. These models can be used to determine the likelihood that you would default on a mortgage, with a presumed high accuracy. Even if there is a loan officer to stamp the approval or denial, it's no longer merely a guideline but an algorithmic decision. How could it be wrong? How could it be right?

To interpret decisions made by a machine learning model is to find meaning in them, and, furthermore, to trace them back to their source and to the process that transformed the data. This chapter introduces machine learning interpretation and related concepts, such as interpretability, explainability, black-box models, and transparency. It provides definitions for these terms to avoid ambiguity and underpins the value of machine learning interpretability. These are the main topics we are going to cover:

To follow the example in this chapter, you will need Python 3, either running in a Jupyter environment or in your favorite integrated development environment (IDE) such as PyCharm, Atom, VSCode, PyDev, or IDLE. The example also requires the requests, bs4, pandas, sklearn, matplotlib, and scipy Python libraries. The code for this chapter is available in the book's code repository.

To interpret something is to explain the meaning of it. In the context of machine learning, that something is an algorithm. More specifically, that algorithm is a mathematical one that takes input data and produces an output, much like with any formula.

Once fitted to the data, the meaning of this model is that predictions are a weighted sum of the features with the coefficients. In this case, there's only one feature, the predictor variable (height), and one predicted variable, typically called the response or target variable (weight). A simple linear regression formula single-handedly explains the transformation that is performed on the input data to produce the output. The following example illustrates this concept in further detail.
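The chapter fits this model with sklearn, but the "weighted sum" idea can be sketched in plain Python with the closed-form least-squares estimates for a single feature. The height/weight pairs below are made-up illustrative values, not the dataset used later:

```python
# A sketch (not the book's sklearn code) of what fitting a simple linear
# regression means: finding the coefficient and intercept of a weighted sum.
# Heights (inches) and weights (pounds) here are made-up examples.
heights = [65.0, 67.0, 68.0, 70.0, 72.0]
weights = [120.0, 127.0, 129.0, 135.0, 142.0]

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(weights) / n

# Closed-form least-squares estimates for one feature
coef = (sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, weights))
        / sum((x - mean_x) ** 2 for x in heights))
intercept = mean_y - coef * mean_x

def predict(height):
    # A prediction is just the weighted sum: intercept + coefficient * feature
    return intercept + coef * height
```

Note that the fitted line always passes through the point of means, so `predict(mean_x)` returns `mean_y` exactly; sklearn's `LinearRegression` arrives at the same coefficient and intercept for one feature.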

If you go to this web page maintained by the University of California, you can find a link to download a dataset of synthetic records of the weights and heights of 18-year-olds. We won't use the entire dataset, only the sample table on the web page itself. We scrape the table from the web page and fit a linear regression model to the data. The model uses height to predict weight.

And voilà! We now have a dataframe with Heights(Inches) in one column and Weights(Pounds) in another. As a sanity check, we can then count the number of records and confirm it matches the number of rows in the sample table. The code is shown here:
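As a stand-in for the scraped dataframe (the book builds it with requests and bs4; the three rows below are made-up values, not the real table), the sanity check amounts to this:

```python
import pandas as pd

# Stand-in for the scraped dataframe; these three rows are made up.
# In the book, the table is scraped from the web page with requests and bs4.
df = pd.DataFrame({
    "Height(Inches)": [65.78, 71.52, 69.40],
    "Weight(Pounds)": [112.99, 136.49, 153.03],
})

# Sanity check: count the records
num_records = len(df)
```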

Now that we have confirmed that we have the data, we must transform it so that it conforms to the model's specifications. sklearn needs it as NumPy arrays with two dimensions, so we must first extract the Height(Inches) and Weight(Pounds) pandas Series. Then, we turn them into NumPy arrays and, finally, reshape them into two dimensions. The following commands perform all the necessary transformation operations:
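A minimal sketch of the reshape step, using made-up values in place of the extracted pandas Series:

```python
import numpy as np

# Stand-in values for the Height(Inches) and Weight(Pounds) Series
# (in the book these come from the scraped dataframe; the numbers are made up)
heights = np.array([65.0, 67.0, 70.0, 72.0])    # one dimension: shape (4,)
weights = np.array([120.0, 127.0, 135.0, 142.0])

# sklearn expects 2-D arrays of shape (n_samples, n_features),
# so reshape each into a single-column matrix
x = heights.reshape(-1, 1)                       # shape (4, 1)
y = weights.reshape(-1, 1)
```

The `-1` in `reshape(-1, 1)` tells NumPy to infer the number of rows from the array's length, so the same call works regardless of how many records were scraped.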

However, explaining how the model works is only one way to explain this linear regression model, and it is only one side of the story. The model isn't perfect, because the actual outcomes and the predicted outcomes are not the same for the training data. The difference between the two is the error, also called the residuals.

There are many ways of understanding an error in a model. You can use an error function such as mean_absolute_error to measure the deviation between the predicted values and the actual values, as illustrated in the following code snippet:
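Written out in plain Python (with made-up actual and predicted weights), the quantity that sklearn's mean_absolute_error returns is simply the average absolute deviation:

```python
# What sklearn's mean_absolute_error computes, written out in plain Python.
# Actual and predicted weights (pounds) below are made-up examples.
y_true = [120.0, 127.0, 135.0, 142.0]
y_pred = [122.0, 125.0, 138.0, 141.0]

# Average of the absolute deviations between actuals and predictions
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```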

The mean absolute error tells us how far, in pounds, the average prediction is from the actual weight, but on its own this might not be intuitive or informative. Visualizing the linear regression model can shed some light on how accurate these predictions truly are.

As you can appreciate from the plot in Figure 1.1, there are many cases in which the actuals are considerably farther from the prediction than the mean absolute error suggests, yet the average alone can fool you into thinking the error is always small. This is why it is essential to visualize the error of the model to understand its distribution. Judging from this graph, we can tell that there are no red flags that stand out about this distribution, such as residuals being more spread out for one range of heights than for others. Since it is more or less equally spread out, we say it's homoscedastic. In the case of linear regression, this is one of many model assumptions you should test for, along with linearity, normality, independence, and lack of multicollinearity (if there's more than one feature). These assumptions ensure that you are using the right model for the job. In other words, height and weight can be explained with a linear relationship, and it is a good idea to do so, statistically speaking.

With this model, we are trying to establish a linear relationship between height and weight. This association is called a linear correlation. One way to measure this relationship's strength is with Pearson's correlation coefficient. This statistical method measures the association between two variables using their covariance divided by their standard deviations. It is a number between -1 and 1, whereby the closer the number is to zero, the weaker the association. If the number is positive, there is a positive association, and if it's negative, there is a negative one. In Python, you can compute Pearson's correlation coefficient with the pearsonr function from scipy, as illustrated here:
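The coefficient that scipy's pearsonr returns can be computed by hand from its definition, which makes the "covariance over standard deviations" description concrete. The heights and weights below are made-up examples, and this sketch omits the p-value that pearsonr also reports:

```python
import math

# Pearson's r by hand: covariance divided by the product of the standard
# deviations (the 1/n factors cancel, so raw sums suffice).
# Heights (inches) and weights (pounds) are made-up examples.
heights = [65.0, 67.0, 68.0, 70.0, 72.0]
weights = [120.0, 127.0, 129.0, 135.0, 142.0]

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(weights) / n

cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, weights))
std_x = math.sqrt(sum((x - mean_x) ** 2 for x in heights))
std_y = math.sqrt(sum((y - mean_y) ** 2 for y in weights))

r = cov / (std_x * std_y)   # always between -1 and 1; close to 1 here
```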

The number is positive, which is no surprise, because as height increases, weight also tends to increase; it is also closer to 1 than to 0, denoting a strong correlation. The second number produced by the pearsonr function is the p-value for testing non-correlation. If it is less than an error level of 5%, we can say there's sufficient evidence of this correlation, as illustrated here:

Understanding how a model performs and in which circumstances can help us explain why it makes certain predictions, and when it cannot. Let's imagine we are asked to explain why someone who is 71 inches tall was predicted to have a weight of 134 pounds but instead weighed 18 pounds more. Judging from what we know about the model, this margin of error is not unusual even though it's not ideal. However, there are many circumstances in which we cannot expect this model to be reliable. What if we were asked to predict the weight of a person who is 56 inches tall with the help of this model? Could we assure the same level of accuracy? Definitely not, because we fit the model on the data of subjects no shorter than 63 inches. Ditto if we were asked to predict the weight of a 9-year-old, because the training data was for 18-year-olds.

Despite the acceptable results, this weight prediction model was not a realistic example. If you wanted to be more accurate but, more importantly, faithful to what can really impact the weight of an individual, you would need to add more variables. You can add, say, gender, age, diet, and activity level. This is where it gets interesting, because you have to make sure it is fair to include them, or not to include them. For instance, if gender were included yet most of our dataset was composed of males, how could you ensure accuracy for females? This is what is called selection bias. And what if weight had more to do with lifestyle choices and circumstances, such as poverty and pregnancy, than gender? If these variables aren't included, this is called omitted variable bias. And then, does it make sense to include the sensitive gender variable at the risk of adding bias to the model?

Once you have multiple features that you have vetted for fairness, you can find out and explain which features impact model performance. We call this feature importance. However, as we add more variables, we increase the complexity of the model. Paradoxically, this is a problem for interpretation, and we will explore this in further detail in the following chapters. For now, the key takeaway should be that model interpretation has a lot to do with explaining the following:

The three main concepts of interpretable machine learning directly relate to the three preceding questions and form the acronym FAT, which stands for fairness, accountability, and transparency. If you can explain that predictions were made without discernible bias, then there is fairness. If you can explain why the model makes certain predictions, then there's accountability. And if you can explain how predictions were made and how the model works, then there's transparency. There are many ethical concerns associated with these concepts, as shown in Figure 1.2:

Some researchers and companies have expanded FAT under a larger umbrella of ethical artificial intelligence (AI), thus turning FAT into FATE. Ethical AI is part of an even larger discussion of algorithmic and data governance. However, both concepts very much overlap since interpretable machine learning is how FAT principles and ethical concerns get implemented in machine learning. In this book, we will discuss ethics in this context. For instance, Chapter 13, Adversarial Robustness relates to reliability, safety, and security. Chapter 11, Mitigating Bias and Causal Inference Methods relates to fairness. That being said, interpretable machine learning can be leveraged with no ethical aim in mind, and also for unethical reasons.

Something you've probably noticed when reading the first few pages of this book is that the verbs interpret and explain, as well as the nouns interpretation and explanation, have been used interchangeably. This is not surprising, considering that to interpret is to explain the meaning of something. Despite that, the related terms interpretability and explainability should not be used interchangeably, even though they are often mistaken for synonyms.

Interpretability is the extent to which humans, including non-subject-matter experts, can understand the cause and effect, and input and output, of a machine learning model. To say a model has a high level of interpretability means you can describe in a human-interpretable way its inference. In other words, why does an input to a model produce a specific output? What are the requirements and constraints of the input data? What are the confidence bounds of the predictions? Or, why does one variable have a more substantial effect than another? For interpretability, detailing how a model works is only relevant to the extent that it can explain its predictions and justify that it's the right model for the use case.

In this chapter's example, you could explain that there's a linear relationship between human height and weight, so using linear regression rather than a non-linear model makes sense. You can prove this statistically because the variables involved don't violate the assumptions of linear regression. Even when statistics are on our side, you still ought to consult the domain knowledge involved in the use case. In this case, we can rest assured, biologically speaking, because our knowledge of human physiology doesn't contradict the connection between height and weight.

Many machine learning models are inherently harder to understand simply because of the math involved in the inner workings of the model or the specific model architecture. In addition to this, many choices are made that can increase complexity and make the models less interpretable, from dataset selection to feature selection and engineering, to model training and tuning choices. This complexity makes explaining how it works a challenge. Machine learning interpretability is a very active area of research, so there's still much debate on its precise definition. The debate includes whether total transparency is needed to qualify a machine learning model as sufficiently interpretable. This book favors the understanding that the definition of interpretability shouldn't necessarily exclude opaque models, which, for the most part, are complex, as long as the choices made don't compromise their trustworthiness. This compromise is what is generally called post-hoc interpretability. After all, much like a complex machine learning model, we can't explain exactly how a human brain makes a choice, yet we often trust its decision because we can ask a human for their reasoning. Post-hoc machine learning interpretation is exactly the same thing, except it's a human explaining the reasoning on behalf of the model. Using this particular concept of interpretability is advantageous because we can interpret opaque models and not sacrifice the accuracy of our predictions. We will discuss this in further detail in Chapter 3, Interpretation Challenges.

By explaining the decisions of a model, we can cover gaps in our understanding of the problem: its incompleteness. One of the most significant issues is that, given the high accuracy of our machine learning solutions, we tend to increase our confidence level to a point where we think we fully understand the problem. Then, we are misled into thinking our solution covers all of it!

At the beginning of this book, we discussed how leveraging data to produce algorithmic rules is nothing new. However, we used to second-guess these rules, and now we don't. Therefore, a human used to be accountable, and now it's the algorithm: in this case, a machine learning model that is accountable for all of the ethical ramifications this entails. This switch has a lot to do with accuracy. The problem is that, although a model may surpass human accuracy in aggregate, machine learning models have yet to interpret their results like a human would. A model doesn't second-guess its decisions, so as a solution it lacks a desirable level of completeness, and that's why we need to interpret models: so that we can cover at least some of that gap. So, why is machine learning interpretation not already a standard part of the pipeline? In addition to our bias toward focusing on accuracy alone, one of the biggest impediments is the daunting concept of black-box models.

This is just another term for opaque models. A black box refers to a system in which only the input and outputs are observable, and you cannot see what is transforming the inputs into the outputs. In the case of machine learning, a black-box model can be opened, but its mechanisms are not easily understood.

These are the opposite of black-box models (see Figure 1.3). They are also known as transparent because they achieve total or near-total interpretation transparency. We call them intrinsically interpretable in this book, and we cover them in more detail in Chapter 3, Interpretation Challenges.

Explainability encompasses everything interpretability is. The difference is that it goes deeper on the transparency requirement than interpretability because it demands human-friendly explanations for a model's inner workings and the model training process, and not just model inference. Depending on the application, this requirement might extend to various degrees of model, design, and algorithmic transparency. There are three types of transparency, outlined here:

Opaque models are called opaque simply because they lack model transparency, but for many models this is unavoidable, however justified the model choice might be. In many scenarios, even if you outputted the math involved in, say, training a neural network or a random forest, it would raise more doubts than generate trust. There are at least a few reasons for this, outlined here:

Trustworthy and ethical decision-making is the main motivation for interpretability. Explainability has additional motivations such as causality, transferability, and informativeness. Therefore, there are many use cases in which total or nearly total transparency is valued, and rightly so. Some of these are outlined here:

Typically, machine learning models are trained and then evaluated against the desired metrics. If they pass quality control against a hold-out dataset, they are deployed. However, once tested in the real world, that's when things can get wild, as in the following hypothetical scenarios:

Any system is prone to error, so this is not to say that interpretability is a cure-all. However, focusing on just optimizing metrics can be a recipe for disaster. In the lab, the model might generalize well, but if you don't know why the model is making its decisions, then you can miss out on an opportunity for improvement. For instance, knowing what the self-driving car thinks is a road is not enough; knowing why could help improve the model. If, say, one of the reasons was that the road is light-colored, like the snow, this could be dangerous. Checking the model's assumptions and conclusions can lead to an improvement in the model by introducing winter road images into the dataset or feeding real-time weather data into the model. Also, if this doesn't work, maybe an algorithmic fail-safe can stop it from acting on a decision that it's not entirely confident about.

One of the main reasons why a focus on machine learning interpretability leads to better decision-making was mentioned earlier when we talked about completeness. If you think a model is complete, what is the point of making it better? Furthermore, if you don't question the model's reasoning, then your understanding of the problem must be complete. If this is the case, perhaps you shouldn't be using machine learning to solve the problem in the first place! Machine learning creates an algorithm that would otherwise be too complicated to program in if-else statements, precisely to be used for cases where our understanding of the problem is incomplete!

It turns out that when we predict or estimate something, especially with a high level of accuracy, we think we control it. This is what is called the illusion of control bias. We can't underestimate the complexity of a problem just because, in aggregate, the model gets it right almost all the time. Even for a human, the difference between snow and concrete pavement can be blurry and difficult to explain. How would you even begin to describe this difference in such a way that it is always accurate? A model can learn these differences, but it doesn't make it any less complex. Examining a model for points of failure and continuously being vigilant for outliers requires a different outlook, whereby we admit that we can't control the model but we can try to understand it through interpretation.

One crucial benefit of model interpretation is locating outliers. These outliers could be a potential new source of revenue or a liability waiting to happen. Knowing this can help us to prepare and strategize accordingly.

Trust is defined as a belief in the reliability, ability, or credibility of something or someone. In the context of organizations, trust is their reputation; and in the unforgiving court of public opinion, all it takes is one accident, controversy, or fiasco to lose substantial amounts of public confidence. This, in turn, can cause investor confidence to wane.

Let's consider what happened to Boeing after the 737 MAX debacle, or Facebook after the 2016 presidential election scandal. In both cases, short-sighted decisions were made solely to optimize a single metric, be it forecasted plane sales or digital ad sales. These decisions underestimated known potential points of failure and missed very big ones entirely. From there, things often get worse when organizations resort to fallacies to justify their reasoning, confuse the public, or distract the media narrative, behavior that can result in additional public relations blunders. Not only do such organizations lose credibility for what they did with the first mistake, but, by attempting to fool people, they also lose credibility for what they say.

And these were examples of, for the most part, decisions made by people. With decisions made exclusively by machine learning models, this could get worse, because it is easy to drop the ball and keep the accountability in the model's corner. For instance, if you started to see offensive material in your Facebook feed, Facebook could say it's because its model was trained on your data, such as your comments and likes, so it's really a reflection of what you want to see: not their fault, your fault. If the police targeted your neighborhood for aggressive policing because they use PredPol, an algorithm that predicts where and when crimes will occur, they could blame the algorithm. On the other hand, the makers of the algorithm could blame the police, because the software is trained on their police reports. This generates a potentially troubling feedback loop, not to mention an accountability gap. And if some pranksters or hackers eliminated lane markings, causing a Tesla self-driving car to veer into the wrong lane, is it Tesla's fault for not anticipating this possibility, or the hackers', for throwing a monkey wrench into the model? This is what is called an adversarial attack, and we discuss it in Chapter 13, Adversarial Robustness.

It is undoubtedly one of the goals of machine learning interpretability to make models better at making decisions. But even when they fail, you can show that you tried. Trust is not lost entirely because of the failure itself but because of the lack of accountability, and even in cases where it is not fair to accept all the blame, some accountability is better than none. For instance, in the previous set of examples, Facebook could look for clues as to why offensive material is shown more often, then commit to finding ways to make it happen less, even if this means making less money. PredPol could find other sources of crime-rate data that are potentially less biased, even if they are smaller. They could also use techniques to mitigate bias in existing datasets (these are covered in Chapter 11, Bias Mitigation and Causal Inference Methods). And Tesla could audit its systems for adversarial attacks, even if this delays shipment of its cars. All of these are interpretability solutions. Once they become common practice, they can increase not only public trust, be it from users or customers, but also the trust of internal stakeholders such as employees and investors.

Due to trust issues, many AI-driven technologies are losing public support, to the detriment of both the companies that monetize AI and the users who could benefit from them (see Figure 1.4). Fixing this requires, in part, a legal framework at a national or global level and, at the organizational end, more accountability from those that deploy these technologies.

There are three main schools of thought in ethics: utilitarians focus on consequences, deontologists are concerned with duty, and virtue ethicists are more interested in overall moral character. This means there are different ways to examine ethical problems, and there are useful lessons to draw from all of them. There are cases in which you want to produce the greatest amount of "good", despite some harm being produced in the process. Other times, ethical boundaries must be treated as lines in the sand that you mustn't cross. And at other times, it's about developing a righteous disposition, much as many religions aspire to do. Regardless of the school of ethics we align with, our notion of what is ethical evolves with time because it mirrors our current values. At this moment, in Western cultures, these values include the following:

Ethical transgressions are cases in which you cross the moral boundaries that these values seek to uphold, be it by discriminating against someone or polluting their environment, whether it's against the law or not. Ethical dilemmas occur when you have a choice between options that all lead to transgressions, so you have to choose between one transgression and another.

Ever since the first widely adopted tools made by humans, technology has brought progress but also caused harm, such as accidents, war, and job losses. This is not to say that technology is always bad, but that we lack the foresight to measure and control its consequences over time. In AI's case, it is not yet clear what the harmful long-term effects will be. What we can anticipate is that there will be a major loss of jobs and an immense demand for energy to power our data centers, which could put stress on the environment. There's speculation that AI could create an "algocratic" surveillance state run by algorithms, infringing on values such as privacy, autonomy, and ownership.

The second reason is even more consequential than the first: machine learning is a technological first for humanity, a technology that can make decisions for us, and these decisions can produce individual ethical transgressions that are hard to trace. The problem with this is that accountability is essential to morality, because you have to know whom to blame, for the sake of human dignity, atonement, closure, or criminal prosecution. However, many technologies have accountability issues to begin with, because moral responsibility is often shared in any case. For instance, the blame for a car crash might lie partly with the driver, partly with the mechanic, and partly with the car manufacturer. The same can happen with a machine learning model, except it gets trickier. After all, a model's programming has no programmer, because the "programming" was learned from data, and there are things a model can learn from data that can result in ethical transgressions. Top among them are biases such as the following:

Interpretability comes in handy for mitigating bias, as seen in Chapter 11, Bias Mitigation and Causal Inference Methods, or even for placing guardrails on the right features, which may be a source of bias. This is covered in Chapter 12, Monotonic Constraints and Model Tuning for Interpretability. As explained in this chapter, explanations go a long way toward establishing accountability, which is a moral imperative. Also, by explaining the reasoning behind models, you can find ethical issues before they cause any harm. But there are even more ways in which a model's potentially worrisome ethical ramifications can be controlled for, and these have less to do with interpretability and more to do with design. There are frameworks such as human-centered design, value-sensitive design, and technomoral virtue ethics that can be used to incorporate ethical considerations into every technological design choice. An article by Kirsten Martin also proposes a specific framework for algorithms. This book won't delve into algorithm design aspects too much, but for those readers interested in the larger umbrella of ethical AI, this article is an excellent place to start. You can see Martin's algorithm morality model in Figure 1.5 here:
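Finding ethical issues before they cause harm can start with very simple checks. For instance, a demographic parity audit just compares favorable-outcome rates across groups. The sketch below uses made-up predictions and a hypothetical grouping, not any real model's output:

```python
# Demographic parity difference: the gap in positive-prediction rates between
# two groups. A large gap is a signal to investigate, not proof of bias.
# All data below is made up for illustration.

def positive_rate(preds):
    """Fraction of predictions that are the favorable outcome (1)."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_a, preds_b):
    """Absolute gap in favorable-outcome rates between two groups."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Hypothetical binary decisions (1 = favorable outcome) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 37.5% favorable

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

This is only a first-pass audit metric; the bias-mitigation techniques mentioned above go much further, but a check like this is cheap enough to run on every model release.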

Organizations should take the ethics of algorithmic decision-making seriously, because ethical transgressions have monetary and reputational costs. Moreover, AI left to its own devices could undermine the very values that sustain democracy and the economy that allows businesses to thrive.

When you leverage previously unknown opportunities and mitigate threats such as accidental failures through better decision-making, you can only improve the bottom line; and if you increase trust in an AI-powered technology, you can only increase its use and enhance overall brand reputation, which also benefits profits. Ethical transgressions, on the other hand, can occur by design or by accident, but once they are discovered, they adversely impact both profits and reputation.

When businesses incorporate interpretability into their machine learning workflows, it creates a virtuous cycle that results in higher profitability. In the case of non-profits or governments, profit might not be a motive, but finances are undoubtedly still involved, because lawsuits, lousy decision-making, and tarnished reputations are expensive. Ultimately, technological progress is contingent not only on the engineering and scientific skill and materials that make it possible, but also on its voluntary adoption by the general public.

Upon reading this chapter, you should now have a clear understanding of what machine learning interpretation is and isn't, and recognize the importance of interpretability. In the next chapter, we will learn what can make machine learning models so challenging to interpret, and how interpretation methods are classified by both category and scope.

Serg Masís has been at the confluence of the internet, application development, and analytics for the last two decades. Currently, he's a Climate and Agronomic Data Scientist at Syngenta, a leading agribusiness company with a mission to improve global food security. Before that role, he co-founded a startup, incubated by Harvard Innovation Labs, that combined the power of cloud computing and machine learning with principles from decision-making science to expose users to new places and events. Whether it pertains to leisure activities, plant diseases, or customer lifetime value, Serg is passionate about providing the often-missing link between data and decision-making, and machine learning interpretation helps bridge this gap more robustly.


sand plotter built with 3d printer parts | hackaday


Sand plotters are beautiful machines. They can make endless patterns, over and over again, only to wipe away their own creation with each new pass. Having seen the famous Sisyphus sand sculpture online, [Simon] decided to make his own.

The build came together quickly, thanks to [Simon]'s well-stocked workshop and experience with CNC motion platforms. The frame was built out of wood, with a combination of hand-cut and lasercut parts. After fabric-wrapping the outer rim turned out poorly, rope was substituted instead for a stylish, organic look. LEDs were installed inside to light the sand for an attractive effect. The metal ball is moved through the sand via a magnet attached to an XY platform mounted on the back of the table. The platform is built out of old 3D printer parts, with a Creality CR10S Pro chosen for its ultra-quiet stepper drivers. Initial attempts to make the system near-silent were hung up by the crunching sound of the ball rolling over the sand; this was fixed by instead mounting the ball on a foam pad. While the ball is now dragged instead of rolling, the effect is one of blissful quiet instead of crunching aggravation.
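A magnet-on-XY-gantry table like this is driven just like any other motion platform: a pattern is simply a sequence of XY waypoints. The sketch below generates points along an Archimedean spiral, the classic sand-table fill/wipe pattern (the table radius, turn spacing, and step size are arbitrary example values, not [Simon]'s actual settings):

```python
import math

# Generate XY waypoints for an Archimedean spiral (r = k * theta): the ball
# sweeps from the centre out to the rim, erasing the previous drawing as it
# goes. All dimensions are arbitrary example values, in millimetres.

def spiral_path(max_radius=200.0, spacing=5.0, step_deg=5.0):
    """Return (x, y) points; `spacing` is the gap between successive turns."""
    k = spacing / (2 * math.pi)          # radius gained per radian of rotation
    theta = 0.0
    points = []
    while k * theta <= max_radius:
        r = k * theta
        points.append((r * math.cos(theta), r * math.sin(theta)))
        theta += math.radians(step_deg)
    return points

path = spiral_path()
print(f"{len(path)} waypoints, final radius {math.hypot(*path[-1]):.1f} mm")
```

In practice each waypoint would be streamed to the motion controller as a linear move; the same generator works for rose curves or Lissajous figures by swapping the radius function.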

The final build is incredibly attractive, and something we'd love to have as a coffee table and conversation piece. We've seen [Simon]'s work around here before, too, with the water-walking RC car a particular highlight. Video after the break.

Nice, I did something similar but had a case of feature creep, so it has a retractable magnet on the top, a pen plotter on one side, and a laser on the other. I can use it to move board game pieces around on the top, and did a little self-playing roguelike pirate ship game. I am horrible at writing any of my projects up though, so I should get round to doing that some day. I also had issues with the ball getting clogged in the sand, so the foam idea might be a good one to try.

If the ball gets stuck, the bottom of the table is too thick, the magnet is too weak, or the sand is too deep. I have been working with these tables for about 2 years and tested all sorts of things, and found that a sand depth of 1/4 to 1/3 the ball diameter is best for detailed patterns and minimal problems with the ball getting buried. I use a 1" cube N52 magnet and it keeps the ball moving even with a 1/2" plywood bottom on the table, and with acceleration of 20k mm/sec^2 and speeds up to 2k mm/sec (the limits of my testing, not the point of failure).

Oh nice, thanks! Yea, I think it's the magnet underneath that was my issue, as I needed to be able to retract it so that it wouldn't move any of the other game pieces around, so I'm just using a little neodymium magnet underneath. I will have a play with a larger magnet and see how that goes. Which servos are you using for it?

Driven by a Duet 2 WiFi controller. WiFi capability means I don't have to have a control panel on the table. It can be programmed to run a specified series of patterns automatically at power-up.

Oh cool, thank you, I've never played with servos that size before. Yea, I have a Duet in one of my printers. Really nice board but pretty pricey. Looks to work great on your table, so I need to have a play with this setup.

I wonder if, to erase everything quickly at once, you could just attach an offset weight to a motor and vibrate everything to a level surface. Now I wanna build one, but I most definitely would have nowhere to put it unless I make it fairly small. Too many projects, too little space/time/money.

the solar sinter by markus kayser | dezeen


In a world increasingly concerned with questions of energy production and raw material shortages, this project explores the potential of desert manufacturing, where energy and material occur in abundance. In this experiment, sunlight and sand are used as raw energy and material to produce glass objects using a 3D printing process that combines natural energy and material with high-tech production technology. Solar-sintering aims to raise questions about the future of manufacturing and triggers dreams of the full utilisation of the production potential of the world's most efficient energy resource - the sun. Whilst not providing definitive answers, this experiment aims to provide a point of departure for fresh thinking.

In the deserts of the world two elements dominate - sun and sand. The former offers a vast energy source of huge potential, the latter an almost unlimited supply of silica in the form of quartz. Silica sand, when heated to melting point and allowed to cool, solidifies as glass. This process of converting a powdery substance via a heating process into a solid form is known as sintering and has in recent years become a central process in design prototyping known as 3D printing or SLS (selective laser sintering). These 3D printers use laser technology to create very precise 3D objects from a variety of powdered plastics, resins and metals - the objects being the exact physical counterparts of the computer-drawn 3D designs inputted by the designer. By using the sun's rays instead of a laser and sand instead of resins, I had the basis of an entirely new solar-powered machine and production process for making glass objects that taps into the abundant supplies of sun and sand to be found in the deserts of the world.

My first manually operated solar-sintering machine was tested in February 2011 in the Moroccan desert with encouraging results that led to the development of the current larger and fully automated computer-driven version - the Solar-Sinter. The Solar-Sinter was completed in mid-May and later that month I took this experimental machine to the Sahara desert near Siwa, Egypt, for a two week testing period. The machine and the results of these first experiments presented here represent the initial significant steps towards what I envisage as a new solar-powered production tool of great potential.

A large Fresnel lens (1.4 x 1.0 metres) is positioned so that it faces the sun at all times via an electronic sun-tracking device, which moves the lens in the vertical and horizontal directions and rotates the entire machine about its base throughout the day. The lens is positioned with its focal point directed at the centre of the machine, at the height of the top of the sand box where the objects will be built up layer by layer. Stepper motors drive two aluminium frames that move the sand box in the X and Y axes. Within the box is a platform that moves the vat of sand along the vertical Z axis, lowering the box a set amount at the end of each layer cycle to allow fresh sand to be loaded and levelled at the focal point.
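The X/Y/Z cycle described here is the same loop any powder-bed printer runs: trace the layer's path in XY at the focal plane, then drop Z and re-level fresh sand. The sketch below is an illustrative control loop with hardware calls stubbed out as a command log; the function names and layer height are invented for the example, not taken from the Solar-Sinter's actual firmware:

```python
# Minimal sketch of a powder-bed (solar-sinter style) layer cycle.
# Hardware is stubbed as a command log; names and values are illustrative.

LAYER_HEIGHT_MM = 1.0   # Z drop per layer (example value)

def sinter_object(layers, layer_height=LAYER_HEIGHT_MM):
    """Each layer is a list of (x, y) waypoints at the lens focal plane.
    Returns the command log a controller would send to the steppers."""
    commands = []
    for layer in layers:
        for x, y in layer:                  # focused beam melts sand along this path
            commands.append(("move_xy", x, y))
        # Drop the vat one layer and re-level fresh sand at the focal point.
        commands.append(("lower_z", layer_height))
    return commands

# Two identical square layers as a demonstration.
square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
cmds = sinter_object([square, square])
print(f"{len(cmds)} commands, {sum(c[0] == 'lower_z' for c in cmds)} layer drops")
# 12 commands, 2 layer drops
```

Because the heat source is the sun rather than a laser, the "calibrated speed" mentioned in the article effectively replaces laser power as the exposure control: moving the box more slowly dwells the focal point longer on each spot.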

Two photovoltaic panels provide electricity to charge a battery, which in turn drives the motors and electronics of the machine. The photovoltaic panels also act as a counterweight for the lens aided by additional weights made from bottles filled with sand.

The machine is run off an electronic board and can be controlled using a keypad and an LCD screen. Computer-drawn models of the objects to be produced are inputted into the machine via an SD card. These files carry the code that directs the machine to move the sand box along the X, Y coordinates at a carefully calibrated speed, whilst the lens focuses a beam of light that produces temperatures between 1400°C and 1600°C, more than enough to melt the sand. Over a number of hours, layer by layer, an object is built within the confines of the sand box, only its uppermost layer visible at any one time. When the print is completed, the object is allowed to cool before being dug out of the sand box. The objects have a rough, sandy reverse side, whilst the top surface is hard glass. The exact colour of the resulting glass will depend on the composition of the sand, different deserts producing different results. By mixing sands, combinatory colours and material qualities may be achieved.

In this first instance, the creation of artefacts made by sunlight and sand is an act of pure experimentation and an expression of possibility, but what of the future? I hope that the machine and the objects it created stimulate debate about the vast potential of solar energy and naturally abundant materials like silica sand. These first experiments are simply an early manifestation of that potential.

In the context of a desert-based community, the Solar-Sinter machine could be used to create unique artefacts and functional objects, but also act as a catalyst for solar innovation for more prosaic and immediate needs. Further development could lead to additional solar machine processes such as solar welding, cutting, bending and smelting to build up a fully functioning solar workshop.

The vibrant and global open-source community is already active in developing software and hardware for 3D printers and could play a key role in the rapid development of these technologies. The Solar-Sinter could simply be the starting point for a variety of further applications.

In 1933, through the pages of Modern Mechanix magazine, W.W. Beach was already imagining canals and "auto roads" melted into the desert using sunlight focused through immense lenses. This fantastical large-scale approach is much closer to reality today, with desert factories using sunlight as their power source a tangible prospect. This image of a multiplicity of machines working in a natural cycle from dusk till dawn presents a new idea of what manufacturing could be.

The objects could be anything from glass vessels to, eventually, the glass surfaces for the photovoltaic panels that provide the factories' power source and, as Mr. Beach imagined 78 years ago, the water channels and glass roads that service them.

Experiments in 3D printing technologies are already reaching towards an architectural scale and it is not hard to imagine that, if partnered with the solar-sintering process demonstrated by the Solar-Sinter machine, this could indeed lead to a new desert-based architecture.

