
Being a Freelance Makeup Artist

Freelance makeup artists do not work for any particular beauty salon on a permanent basis. Rather, they are self-employed.

Well, there is no better feeling than being a stylist and being a part of someone’s special moments.

It has become very common nowadays for individuals to choose to be independent freelance makeup artists, where they can set their own protocols, their own working hours, their own products and, of course, their own prices.

Fields like the film industry, theatre, and the wedding industry all require freelance makeup artists.

Now, if you are someone who is really set on being a freelance makeup artist, then there are certain things you can't rule out if you want to get the best out of your career!

Skills Required

Zeal and Dedication

Passion is the most crucial prerequisite when we are talking about a career. When we are passionate, our vision of the world changes.

Passion is what makes you unique and holds you in good standing compared to others. Your dedication is the key that will unlock the door to your success.

When you are filled with energy, a positive vibe runs through your veins and helps you step ahead towards a successful career.

Social Graces

Good communication skills help you build strong, healthy relationships with clients and will ultimately add to your personality, so that you feel confident when dealing with customers.

Communication skills are required in every aspect of the profession, especially when you are working independently.

When you are compared with the other artists in your vicinity, your social grace is what will set you apart.

Self-Esteem

Remember, not every attempt is guaranteed 100% success! There will be times when you face failure and dejection, but at those times you must believe in yourself and continue to strive until you reach your destination.

You must believe in your capabilities and start over fresh. No one will be able to knock you down if you have a strong, positive outlook.

Stepwise Guide To Head Towards Your Goal

We hope you will acquire all the skills listed above, which are a must in your freelancing business.

Now, listed below are certain points that you must focus on if you are an aspiring freelance makeup artist:

The right training

The beginning of every journey counts!

You can't rule out the fact that budding makeup artists are popping up every day. You can imagine the competition that will come your way while building your career in this field.

When you are thinking of building your career in this stream, the most important requirement is to secure a qualification such as the ICI Diploma in Beauty Therapy & Makeup.

Don't forget, certification counts! And if you train under a well-known name in the industry, your pricing and popularity will naturally increase.

A Captivating Portfolio

This step can't be missed; you might remember that "a picture speaks a thousand words."

Yeah! Of course, it is a fact. Never stop practicing: keep trying your hand on family members, relatives, and friends. This will enhance your skills and help you cover every detail of your work.

Capture beautiful samples of your talent and upload them to your portfolio. This will not only help your talent reach people but will also cheer you up!

This step requires a lot of perseverance, as you need to capture every fine detail of your work, whether you are styling a model or a bride.

You can even hire a professional photographer for this work.

Quality of Products

This is one of the finer details to focus on when you are building your career in makeup.

Remember, quality matters. This step will require not only a monetary investment but also your time, so that you can pick the best for your clients.

From your beauty blender to your setting spray, everything counts in binding customers to you, so that they never think of another name but yours.

Pricing

When it comes to setting your price, it's an arduous job!

Every neophyte faces a dilemma about what prices to set: keep them too low and you will sound "unprofessional"; keep them too high and you risk pricing clients out of your service.

Thus, after doing enough digging into your competitors' price levels, you can decide on the optimum charge.

Website or Blogging

With an outline of your fabulous work and a series of handpicked pictures of customers, you can build a website, or you can even start by writing blogs that show how you are different from others in the competition.

Your website should be eye-catching enough to hold visitors' attention, with key points highlighting your talent.

Reach out to local professionals

As a newcomer, you might find it difficult to get established; in that case, you can start by approaching a local professional for a styled shoot.

You may not be paid for your work, but the best part is that you will make enough contacts in the industry and with professionals that establishing yourself becomes easy as pie!

Attend makeup artist workshops or trade shows

It is very important to update yourself with the latest trends in makeup so that you are never left behind. 

You can watch the latest videos from makeup artists to help you stay up to date.

You can even purchase a professional makeup kit at a discount.

Moreover, when you attend workshops, you will pick up many makeup hacks that will ultimately enhance your skills!

Confidence and Support

Last but not least, believe in yourself and be confident in what you do.

Being a freelancer means you are a solo bird trying to fly high while competing with everything else in the sky.

Life is full of ups and downs, and it's never completely a bed of roses. At times you will feel low and disheartened, whatever the reason may be. Consider every downfall a push towards your goal.

Make the best out of every opportunity on your plate and you will find success waiting for you at the doorstep!

Guerilla Phenomenology

Art as Sensory Pharmakon & the Mass Media Overdose

The following presentation is a thought experiment that, in all seriousness, asks you to examine the nature of media. I honestly believe that social, political, and financial challenges, as well as the far more imposing ecological crisis, can be approached and potentially mitigated by our prompt, critical engagement with media. In that respect, much has been made lately of 'knowledge mobilization.' According to the Social Sciences and Humanities Research Council of Canada (a post-secondary federal funding agency), the aim of knowledge mobilization is the completion of the circuit by which the academy gives back to the community that supports it.

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/SSHRCKnowledgeMobilization3.jpg

From ‘SSHRC Knowledge Mobilization Strategy’

But the very phrase itself risks undermining its own agenda. For mobilization suggests the movement of resources, as in troops or goods; it's as though universities are sitting on stockpiles of research that need only be made accessible to meet our obligation to the public at large. This is precisely the misguided, unidirectional thinking that I will be arguing against. For if universities are to fully embrace the admirable ideals behind knowledge mobilization, then they must be willing to change what they consider knowledge to be. To that end, this talk will focus on the capacity for media to shape knowledge through an uncanny cause and effect process. In order to do that, we will need to engage in a form of 'guerilla phenomenology,' as we look at art, mass media, and things in between as opportunities to explore sensory experiences.

Art Effect Before Mass Media Cause

Here we have a pointillist painting by Seurat of his mother dated 1883. And here we have a television image, approximately 1954.

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/SeuratWithOldTV.jpg

This is a work by Paul Klee, expressionist/cubist painter, circa 1914. This is a diffuser lens taken from an LCD projector in 2012.

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/PaulKleeProjectorDiffuser_Small-1024x382.jpg

This footage is of Myron Krueger’s Videoplace [start video at 2:14, drawing on screen], which began to take shape in 1971.

And here is the Microsoft Kinect camera in 2010 [start video at 0:14 – kinectimals scene].

Art As Pharmakon

These examples are intended to illustrate that in terms of form and sense, art can and often does precede mass media. By this I mean that the pointillist painting, with its black and white dots forming a composite image in the mind of the viewer is similar in kind to the perceptual engagement with the low-fidelity television image with its array of monochrome dots. I am not claiming that Seurat anticipated the television nor am I saying that without pointillism TV wouldn’t exist. Instead I am claiming that the artist and the critically active observer of art engage with sensory experiences decades, sometimes centuries, prior to their deployment as tools of mass media. In this sense, each artwork contains its own cautionary tale in the form of a pattern known to be efficacious when put in contact with the human sensorium. This pattern is like a pharmakon: in small doses it builds an immunity, but in large amounts, it’s poisonous.

Tom is 54 years old. In the 1970s he attended one of Myron Krueger's Videoplace exhibitions and witnessed the potential for a computer to mediate the body. When Tom saw the first commercial for the Xbox Kinect, he merely shrugged while others heralded the increased interactivity.

When Apple released their high-definition retina technology for their mobile devices, the iPhone and iPad, users and reviewers alike marvelled at the clarity of the image. Truly this was a step forward in mobile technology.

What I wish to highlight here is that whether an interactive human-computer interface, a gaming console, a tablet, or a cell phone, each medium relies on impressions made on the sensorium first and then, after the fact, a conclusion is drawn. Or as McLuhan would have it: "Effects are perceived, whereas causes are conceived." First the light reflects off a mess of black and white dots, striking the retina. Afterwards the cause is determined. You might say: 'Surely it was a woman before I realized this fact.' And just like that, a single cause is retroactively inserted. The image goes from points in human form to human in pointillist form. At issue here is that we produce media not according to their effects (what they can teach us) but as causes in themselves (heightened realism). By cause I mean a singular, localized source; the billiard ball that first sets the others in motion, the finger that pulls the trigger to fire the gun, the computer that starts the calculation. In other words, causes are reductions. Did the finger fire the gun? Or was it the firing pin that struck the bullet? Or perhaps it was the chemical reaction that actually launched the projectile? Ultimately, a cause is a series of effects reduced after the fact to a single interpretation.

Put differently, cause and effect is a linear interpretation of a decidedly non-linear occurrence. This is apparent in Merleau-Ponty's characterization of classical science as "a form of perception which loses sight of its origins and believes itself complete" (Merleau-Ponty). This applies especially to those empiricists who dedicate their lives to studying effects without any thought to the causality imposed by their observation techniques. Speaking on the subject of film in the late 1980s, Charles Eidsvik wrote that "The basic problem in theorizing about technical change…is that accurate histories of the production community and its perspectives, as well as of the technological options…must precede the attempt to theorize." While on the topic of games Aphra Kerr asks "How can we talk with authority about the effects of digital games when we are only beginning to understand the game/user relationship and the degree to which it gives more creative freedom and agency to users?" With all due respect to these theorists, we must abandon this naive approach to media studies. For in essence Kerr is stating that we must wait for the results to emerge that fit our method of testing. With this mentality, it's no wonder we are impotent in the face of the present ecological crisis. We have Geiger counters, carbon sensors, and emission testers; that is to say, we have all the tools to tell us it's too late, and few to prevent the actual event.

Events in the Gap

But the future is not something that happens to us. Our role as victims or beneficiaries of the passage of time is self-ascribed. It happens that this temporality is sustained and perpetuated by our media, which are decidedly linear in their causality. Consider how one reads a comic book. As our eyes move from panel to panel, the mind fills in the missing information needed to complete the action. But despite the fact that the additional information is added last, the overall experience of reading the comic means that this third event occurs prior to the second. That is, the cause of the events in the second panel is determined AFTER the panel is experienced, but we read the cause as taking place BEFORE.

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/SamHill.jpg

If we explore this cause in detail, it is at once familiar to us and yet unknown; it soon becomes clear that, bounded by the adjacent panels, events in the gap are indeterminate, irreducible, but somehow already present. To read this form, ultimately, is to accept indeterminacy without having to pay homage to the undefined. My critique here is not aimed at comic books specifically but at specialized literacies in general, and especially those that follow from linear media. What we need to consider is the possibility of media that are not so successive, that are not so ostensibly progressive, where the only role of the reader/participant/gamer is to perpetuate the dialogue, the script, or the gameplay. But this notion of linearity still bears description. For a more literary example, consider the murder mystery where the disclosure of the killer's identity rewrites the narrative. When the detective reveals that the butler did it, our minds return to his seemingly innocuous behaviour. Suddenly the butler's absence from the dining hall now appears grossly suspicious, his phrasing now indicates a measure of foreknowledge about one of the murders, and his behaviour when one of the bodies was found, which appeared at first reading to be genuine, now smacks of pretense. Given this example, perhaps it would be best to revisit the forefather of the detective genre.

Invisible/Familiar

In Edgar Allan Poe's essay "The Philosophy of Composition," the American author reveals, in a remarkably upfront manner, his methodology: "I prefer commencing with the consideration of an effect…I say to myself, in the first place, 'Of the innumerable effects, or impressions, of which the heart, the intellect, or (more generally) the soul is susceptible, what one shall I, on the present occasion, select?'" While Poe's ensuing example here is 'The Raven,' it was his short story series starring detective Auguste Dupin that popularized the writing technique of starting with a clever conclusion (the effect) and working backwards to the beginning where events were hardly so clear (the cause). In one particular tale, called "The Purloined Letter," Poe describes the failure of the Parisian police in their task of retrieving a stolen letter. We are told at the opening of the tale that the police know who stole the letter and, in fact, they even know which house it is in. But after exhaustive and creative endeavours to locate the hiding place, the police come up empty handed. And yet Poe's detective extraordinaire, Dupin, locates the letter almost immediately. For it was hidden in plain sight, obscured by nothing other than its familiar placement amongst other letters on a bureau.

Familiarity plays a key role here. Just as we are familiar with how to read a comic by inserting the necessary action between panels, the police were trained to be familiar with petty criminals and the routine behaviour of a thief. We can call this familiarity ‘literacy.’ But like the dyslexia that follows speed reading, the more familiar we become, the more our expectations supersede reality. The pharmakon-quality of sensory patterns resurfaces. Our empirical methods, like those of the highly trained police officers in Poe’s story, fail to account for the simple fact that we cannot find what we do not expect to see. It’s true that test results may change our expectations of the future but by then it’s already the past. The question then becomes: Can we disrupt this dependency? And if so, how?

In my own work I see each medium as bringing with it a series of structural metaphors that facilitate communication across a culture. New media = new metaphors. In fact, we can push this even further to say that media modify the conditions of possibility within a culture. For instance, the computer, as metaphor, has radically altered not only how Westerners conceive of the human body but biology in general. It is not uncommon to hear biologists speak of the behaviour of microorganisms as running operating systems; and in fact it is difficult to conceive of the entire field of synthetic biology, the design and construction of biological devices and systems, without the computer as metaphor. Here we have an example taken from a synthetic biology competition.

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/610px-UT_HelloWorld.jpg

This example illustrates that to a certain, unknown degree, media have a very real impact on what humans can conceive of and talk about. Recall that effects are perceived, whereas causes are conceived, and we can begin to see that each medium, in fact, instantiates its own form of causality when paired with an accompanying literacy. And to date our literacies mimic the linear causality of the media that produce them. How we read the pointillist painting and how we correlate movement with an on-screen avatar each requires our senses to connect visual, audile, and tactile information based on feedback from the medium. But like a linguistic form of quantum entanglement, as I speak this sentence each additional word modifies the meaning of the entire sentence such that it always already meant what the last word intended it to mean. The listener, as well as the reader, the film observer, and the gamer, is perpetually waiting for change to occur, for effects to reveal causes.

Feedforward Art

But as we have seen, the cause after the effect indicates a systemic blind spot that is filled in by mental processes. It is worth considering that the present array of social, political, and ecological crises have emerged from behind such blind spots. Thus, to await the effects of a medium, a program, a policy, or a chemical process, does less to guide future endeavours and more to expose the inadequacy of our current perceptual tools. The sheer magnitude on which humanity now operates condemns this feedback approach as a viable model for change. What, then, is the alternative? In response to feedback media, I would suggest that it's feedforward art. It's Seurat, and Klee, and Krueger as effects before their cause as mass media. It's perpetually new forms of media immunizing us from long-term side effects of mass media. It's about creating a dynamic array of perspectives such that no blind spot goes unexamined. It's about moving from receptive to active media. It's a full realization that the future of the future is the present. Let's take what we've covered here and return to the notion of knowledge mobilization:

“The relationship between knowledge mobilization, and outcomes and impacts is far from a simple question of “cause and effect” and, rather, more recursive…A social sciences and humanities voice on knowledge mobilization opens the door to non-linear, dialogical, discursive and multi-directional approaches with the general acknowledgement that all knowledge is “socially constructed” unlike the unidirectional “producer-consumer” implications of concepts such as knowledge and technology transfer.” (Social Sciences and Humanities Research Council of Canada Knowledge Mobilization Strategy)

Certainly there are some interesting language choices here. Yes, we need to move beyond the simple question of cause and effect, and yes, non-linearity will be key moving forward. But we must also accept that effects precede causes. And we must not mistake multi-directional or multi-vocal approaches for non-linear ones. The Canterbury Tales may provide stories from a range of classes, but the medium itself is still fastidiously linear. Instead, the mobile element of knowledge mobilization should be directed less at the material and more at the intended recipients; in other words, what will mobilize the community? I think we can all agree that it isn't open access to JSTOR articles.

Then what is it? I think it begins with non-linear media. And in that respect I can only offer three steps towards a definition, and they are small ones at that. What follows are three projects that focus on the computer specifically. This is because another stumbling block towards feedforward is the fact that we populate new media with old metaphors, such that McLuhan's observation that "Computers are still serving mainly as agents to sustain precomputer effects" remains as true today as when he uttered it over thirty years ago. And nowhere is this clearer than in the realm of videogames. Our gaming machines are populated with narrative effects borrowed from literature, film, radio, newspapers, and the stage. The unique effects that follow digital, ludic narratives are rarely realized, and the following are hardly the best examples. But here are three attempts to realize that potential nonetheless. In terms of guerilla phenomenology, each game here explores an aspect of perception, starting with time, space, and then causality.

The Bureau:

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/TheBureau.jpg
  • Based on Poe’s Purloined Letter and the idea that the desktop or bureau is no longer so quotidian
  • Events were populated in real-time. Some were written in real-time as well (i.e. cell phone conversation)
  • The story is about a serial killer who, it is revealed, claims his victims according to one of the most ancient patterns: the Sator Square. With murders that began in the 80s, the killer resurfaces in the present day, continuing the pattern of matching profession and initials in order to determine his victims. As a player we act as voyeur, watching the desk of FBI Agent Mike Kim as he attempts to solve the mystery. At the end of the game it is revealed that in fact there was no pattern to begin with. The murders in the 80s were found via pattern analysis on the part of the killer, as he played upon the FBI’s own obsession with pattern-analysis and profiling.
  • The objective here was that, by telling the narrative in real time and in such a confined manner, I was telling a story that was inseparable from the medium.
  • In terms of non-linearity, the persistent nature of the narrative actually disrupts the role of the player/reader as a necessary agent of action. And the fact that it was, in part, a performance, means that it can never be played again.

Antienvironment:

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/Antienvironment-Animated.gif
  • This one is a little more straightforward to explain. Essentially, the history of videogames, like that of literature, has a series of styles that appear to succeed one another in terms of the experience of realism. Games offer a unique opportunity to debunk this myth of realism and mimesis.
  • What the game involves is a series of visual and audile modes that increase in ‘fidelity’ throughout the game. Each successive level will appear to invalidate the preceding one as the so-called realism increases. But as all gamers know, an 8-bit game can be as engaging as the most technically advanced titles. Thus, players will meet a series of challenges, some of which will be easier to complete on lower fidelities.

Apophis:

https://web.archive.org/web/20150910113709im_/https://www.thedewlab.com/wp-content/uploads/2013/01/apophis-predictions-in.jpg
  • Apophis is a near-Earth asteroid that will approach Earth in 2029. Should it pass through a gravitational keyhole 800 m wide, it will be on a collision course with Earth in 2036. The current strategy for avoiding a catastrophic event is to detect such collisions 10 to 20 years in advance, the idea being that if you can strike the asteroid with a relatively small missile, the very minute course correction will compound over several decades, resulting in a miss (see the rough sketch after this list).
  • Of course, this would involve governments cooperating and coordinating on a global scale about an event that is likely but not guaranteed to occur in the near future. That should sound rather familiar…
  • The game itself takes place over 30 days and involves playing a range of characters in a non-chronological fashion, the idea being that actions on day 3 may or may not impact events on day 29. What I hope to achieve is a separation of cause and effect, removing the simple pleasure we get from gaming and thereby increasing the pressure on each decision, as its result will not be immediately apparent.
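To make that compounding concrete, here is a minimal back-of-the-envelope sketch in Python. It uses a purely linear approximation that ignores real orbital mechanics and keyhole amplification, and the delta-v and lead time are hypothetical values chosen only to show the scale involved.

```python
# Back-of-the-envelope sketch (not part of the game): how a tiny course
# correction compounds over decades. Linear approximation only; real
# deflection planning involves full orbital mechanics and keyhole
# amplification. The delta-v and lead time below are illustrative guesses.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def linear_drift(delta_v_m_per_s: float, lead_time_years: float) -> float:
    """Displacement accumulated by a constant velocity change over time."""
    return delta_v_m_per_s * lead_time_years * SECONDS_PER_YEAR

drift_m = linear_drift(0.005, 20)  # a 5 mm/s nudge applied 20 years out
keyhole_m = 800                    # keyhole width cited above

print(f"Accumulated drift: {drift_m / 1000:,.0f} km")
print(f"Keyhole width:     {keyhole_m / 1000:.1f} km")
# Roughly 3,156 km of drift versus a 0.8 km keyhole: even a minute
# correction, applied early enough, dwarfs the window that would put
# the asteroid on a 2036 collision course.
```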

Introduction to Object Oriented Ontology

This introductory guide to Object Oriented Ontology is an on-going collection of central theses and surrounding debates. If you’d like to submit a text, blog, or discussion thread that you think is particularly useful to the would-be OOO scholar you can post a link in the comments or email me at mike@gauravtiwari.org


Getting Started

Definition

“Ontology is the philosophical study of existence. Object-oriented ontology (“OOO” for short) puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally—plumbers, cotton, bonobos, DVD players, and sandstone, for example. In contemporary thought, things are usually taken either as the aggregation of ever smaller bits (scientific naturalism) or as constructions of human behavior and society (social relativism). OOO steers a path between the two, drawing attention to things at all scales (from atoms to alpacas, bits to blinis), and pondering their nature and relations with one another as much with ourselves” – Ian Bogost

Antonym

OOO is not correlationist: “The correlationist strategy consists in demonstrating that the object can only be thought as it is given, and it can only be thought as it is given for a subject. In drawing our attention to givenness for a subject, correlationism thus demonstrates that we can never know what the object is in-itself, but only what it is for-us. In short, any truth one might articulate is not a truth of the world as it would be regardless of whether or not we exist, but only a truth for-us” – Levi Bryant

Explication

If you’re interested in OOO then there are few scholars better equipped to guide you than Levi Bryant, Ian Bogost and Timothy Morton. In 2010 all three scholars presented on a panel at the RMMLA conference in Albuquerque, New Mexico. This 90 minute recording offers an accessible introduction to OOO (Bryant – 0-36:37), the influence of OOO philosophy on scholarly practices (Bogost – 36:27-105:44), as well as the resonance between object oriented ontology and climate change (Morton – 105:44-134:31)

Direct Link [Download]

If you're interested in the fundamentals of OOO, look at Bryant's essay "The Ontic Principle," found in The Speculative Turn (the free PDF version of which can be found here [Link]).

Intrigued? Consider the books, blogs, and discussions below as a primer.

Book Oriented Ontology

The Democracy of Objects by Levi Bryant

Accessible. That's the one word I would use to describe Bryant's comprehensive expansion of OOO philosophy. Not only does The Democracy of Objects make OOO accessible, Bryant even presents lucid readings of Žižek, Deleuze & Guattari, and Luhmann. And he does so while clearly distinguishing his own interpretations (Bryant's attempt here is not to revise existing philosophy but, similar to Harman in Tool-Being, to define OOO primarily by what it is similar, but not equal, to). This past summer I read a number of OOO/speculative-realism texts, and I wish I'd started here.

Tool-Being by Graham Harman

Much of OOO scholarship attempts to demonstrate that philosophy (at least, the phenomenological/ontological variety) moves much too fast. For at high-speeds, a simple unexamined conclusion can propagate at an alarming rate. In such cases, the syllogism becomes an enthymeme and the solution that follows solves a false problem. This is one way of summarizing Harman’s Tool-Being, which claims that Heidegger committed an oversight early in his career that then propagated throughout the rest of his work. Harman revisits the tool/broken-tool dichotomy, reversing the enthymeme back into a syllogism to demonstrate just where Heidegger veered off course. The result is a slowing down of ontological inquiry. This simple change of pace suggests that other philosophers should look back, back towards the initial premises that establish the relationship between vorhandenheit and zuhandenheit.

Alien Phenomenology by Ian Bogost

One way to summarize this book would be to state that, in a way, Alien Phenomenology  is to the everyday visual experience as John Cage was to the everyday acoustic experience (bear with me!). Bogost draws heavily on photography, specifically the work of Stephen Shore, in revisiting the commonplace from the perspective of the objects themselves. Bogost uses Shore’s work to deconstruct the artifice of the constructed photograph, demonstrating the pervasiveness of meaning, as Cage did with his work on the fallacy of silence. Indeed, the fallacy that OOO wishes to expose is that of correlationism and so Bogost focuses on mediations of the everyday, such as McDonald’s packaging and the unique characteristics of light sensors in various digital cameras, in order to propose alternative phenomenologies, thereby decentering that of the human. The book as a whole challenges its readers not to adopt OOO wholesale but to become an explorer of the everyday, an amateur carpenter of commonplace things.

Blog Oriented Ontology

OOO is one of those rare projects to unfold, one might even say gestate, in the open forum of the world wide web. Below is an updated list of texts, blogs, articles, and interviews concerning the evolution of OOO.

Bryant is the most prolific of the OOO bloggers and so you might be a little overwhelmed by the sheer amount of content on Larval Subjects. Here are a few points of entry I found particularly inviting:

The Materiality of SR/OOO: Why Has It Proliferated? – Bryant looks at the rather rapid spread of OOO, its place outside of academic journals, and its use of social media + the blogosphere

Yellow Submarines and Operational Closure – OOO meets systems theory/autopoiesis

“Operational closure is not a happy thought. It presents us with a world in which we’re entangled with all sorts of entities that we can hardly communicate with yet which nonetheless influences our lives in all sorts of ways”

Is There a Politics of OOO? – The intersection between OOO and Ranciere’s political theory, where OOO de-centers the human

"OOO, by contrast, makes the strange claim that humans are objects among other objects (they have no ontologically privileged position and are not the crown of existence) and proposes the strange idea of active objects (objects that aren't merely passive recipients of the acts of other entities but which are agencies in their own right)."

Bogost has a rather diverse CV and while OOO seems to inform most, if not all, of his other works, his blog is a hodge-podge of gaming, rhetoric, and philosophy. Here are some OOO-specific posts that introduce you to Ian’s thought:

Seeing Things – A video and transcript covering OOO, ontography, and the everyday

“Object-oriented ontology is thus not only the name for an ontology oriented toward objects, but a practice of learning how to orient toward objects ourselves. And, mise-en-abyme-like, how to orient toward object-orientation.”

OOO and Politics – A pointed response to the criticism that OOO 1) downplays ethics, especially human-centered ethical conundrums, and 2) that OOO scholars have failed to grapple with human relationships

“Sometimes I regret having gotten back into the “traditional humanities” after spending the last ten years in a weird hybrid of liberal arts and engineering at a technical institute. For it deals with the greatest irony of conservatism: a conservatism whose hallowed tradition is a purported progressive radicalism. Things are changing in philosophy, and that change is terrifying to some and liberating to others—perhaps it should be both”

Morton's blog offers an immense amount of OOO content, so much, in fact, that this guide is, in large part, an attempt to collate and organize large portions of that rich repository of philosophical material. That said, Morton's site is a veritable treasure trove with many hidden gems, like past talks or links to free(!) ebooks. What's more, it's frequently updated and includes past, present, and future talks from Morton, as well as previously taught grad courses.

I find it quite useful to observe any idea or philosophy taking root in someone's thoughts, and that's what Andre Ling offers when he blogs about object oriented ontology.

Others

Literature Oriented Ontology

Graham Harman, Timothy Morton, and Jane Bennett discuss OOO and literature, all within a single issue of The New Literary History. Special thanks to Kris Coffield over at Fractured Politics [link] for collecting the articles.

“The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism” by Graham Harman

Summary: In typical Harman fashion, OOO is discussed in terms of how it is similar to but fundamentally distinct from three influential theories of literary criticism: New Criticism, New Historicism, and Deconstructionism. Harman then proceeds to outline an OOO form of literary criticism, advocating in the process a kind of understanding through deformation–that is, a form of analysis where the critic actively distorts a text in various ways until it becomes unrecognizable as that text. The point being that changing a single word (say, a typo) has little impact on the identity of Moby-Dick (to borrow Harman’s example) but what about changing the name of a character? The omission of a particular conversation? A complete reorganization of events? In other words, Harman proposes analyzing literature by determining how much play we have with the parts before the whole constitutes a different object entirely. [PDF]

“An Object-Oriented Defense of Poetry” by Timothy Morton

Summary: Morton picks up Percy Shelley's A Defence of Poetry in order to grapple with literature as a unique, nonhuman object. For Morton it is crucial that objects are not determined by a fixed frame or medium but rather they produce the frames in which they exist--they frame themselves in their own space-time. The connection here with literature is that poems, novels, biographies, etc. each create their own space-time through their unique narrativization of events (which reminds me of Shklovsky's "Art as Technique"). As with all OOO writing, Morton is concerned with describing a different kind of relation between an object and its parts. This begins by recalling that OOO steadfastly maintains that an object cannot be reduced to (or strictly constituted by) its relations--it will always escape any and all relationships, even from itself. Following this point Morton remarks that "We are too accustomed, argues OOO, to seeing things as patterns and not as objects" (219) (a statement that echoes Hayles' pattern-over-presence argument in How We Became Posthuman). The distinction Morton is making is between the poem as effect and the poem as cause--if each text creates its own space-time, its existence as a unique object is overwritten or erased by suggesting it is merely a cultural product. Both Harman and Morton seek to allow texts to exist as texts; that is, to explore the effects of an object rather than the object as the site of inscription for the effects of other objects (namely, cultural, social, political, and psychological systems). [PDF]

“Systems and Things: A Response to Graham Harman and Timothy Morton” by Jane Bennett

Summary: Here Bennett offers a rebuttal to the preceding essays, defending systems, bodies, and relationism in general against OOO. Central to the debate here is where the scholarly gaze should be directed. OOO maintains that for too long we have remained fixated on relations, thereby denying the objects themselves their rightful ontological existence. In other words, relationality denies what I would call, following McLuhan, the effects of objects. Objects, like media, are often mistaken for their relations (i.e. content) and thus they risk being figured as blank slates on which culture is inscribed. Bennett is very much aware of this fear, this imbalance in scholarship; however, her response is to ensure that this reaction isn't an overreaction. [PDF]

Discussion Oriented Ontology

Here are some buzz words for you: speculative realism, object oriented ontology, new materialism, new aesthetics, correlationism, continental philosophy. A Venn diagram of where these fields overlap would look more like a Rorschach test than anything else. Below is an on-going collection of conversations concerning OOO, its variants, detractors, and outright deniers.

"OOQ – Object-Oriented-Questions" – Jussi Parikka

Bryant’s Response: Some Responses to Jussi

Bogost’s Response: Object-Oriented Answers

OOO U

OOO U (as in university) is a collection of podcasts, videos, and conference papers on object-oriented ontology. Think of it like a semester in the life of an object-oriented grad student, minus the assignments.

Coursework

During his spring 2012 term, Timothy Morton live-streamed and podcasted his grad course on object-oriented philosophy. Below are the eight classes, which cover a wide range of topics in, on, and around OOO. From armchair philosophers to dissertation drafters, the pacing and accessibility of the material, as imparted by Morton and his students, is ideal for any interested party.

Class 1: Introduction

[Embedded Audio/Video] [Download Audio: Part 1] [Part 2]

Class 2: Correlationism

[Embedded Audio/Video] [Download Audio: Part 1] [Part 2]

Class 3: Phenomenology and the Thing

[Embedded Audio/ Video] [Download Audio]

Class 4: Husserl, Heidegger, and the Descent

[Embedded Audio/Video] [Download Audio]

Class 5: Withdrawal

[Embedded Audio/Video] [Download Audio]

Class 6: Causality

[Embedded Audio/Video][Download Audio]

Class 7: Flat Ontologies

[Embedded Audio/ Video] [Download Audio]

Class 8: Materialisms

[Embedded Audio/Video] [Download Audio]

Conferences

A collection of conference papers presented on OOO. Unless otherwise stated, all recordings were made and uploaded by Timothy Morton.

Graham Harman – A History of Speculative Realism and Object-Oriented Ontology – [Download][Embedded]

In this paper Harman takes us through the philosophical origins of speculative realism and OOO, noting the shared frustration with continental philosophy and the inability to discuss realism in the face of materialism and other forms of correlationism. If the border between SR and OOO is at all fuzzy for you, Harman draws a clear distinction here. (Hello Everything Symposium 2010)

Timothy Morton – Sublime Objects – [Download][Embedded]

In this playful paper Morton uses a children’s tale to preface his talk on rhetoric and OOO. Here Morton proposes that OOO necessitates the inverse of Aristotle’s five-part structure of rhetoric (invention, arrangement, style, memory, and delivery), thereby elevating delivery in order to support the thesis that “rhetoric is what happens when there is an encounter between any object, that is, between alien beings.” The target here is the privileging of style over delivery and the limited role of rhetoric in philosophy. (McLuhanists will likely see this as a privileging of content over medium). (Hello Everything Symposium 2010)

Ian Bogost – Object Oriented Ontogeny – [Download Audio][Embedded]

The ontogeny presented here by Bogost is an autobiographical sketch of a visually-impaired father and a young man’s growing interest in quotidian objects. Over the course of the paper we encounter lives that are, for lack of a better phrase, object-oriented. This history provides a basis for Bogost’s object-oriented research, including this extended look at the Atari VCS (as discussed in detail by Bogost and Nick Montfort in Racing the Beam). There’s not a lot of new OOO here but Bogost’s self-reflection on his interest in the particularity of objects is refreshing and definitely worth a listen. (Hello Everything Symposium 2010)

Graham Harman – Real Objects and Pseudo-Objects: Remarks on Method (Starts at 28:46) – [Download][Embedded ]

Graham presented this paper on the same day as his “History of Speculative Realism” (above) and thus it’s more of an elaboration on those finer points of OOO not covered there. (Hello Everything Symposium 2010)

Levi Bryant – On the Reality and Construction of Hyperobjects with Reference to Class –[Download][Embedded]

Bryant extends the concept of Morton’s hyperobjects to readings of social classes, bringing together Sartre (antipraxis) and Latour (sociology of the social). Hyperobjects inherently refer to a scale of objects beyond human comprehension (in regards to their scope). The aim here is to 1) address the existence of class (Bryant uses Margaret Thatcher’s claim that ‘there are only individuals and families’ in order to counter that classes themselves are also individual objects) and 2) articulate the role of classes as objects that cannot be the sum of their technological/social/cultural parts but nevertheless influence and are influenced by these other objects. (Hello Everything Symposium 2010)

Timothy Morton – Dark Ecology – [Download][Embedded]

In this talk Morton sets his sights on the 'anthropocene' as he attempts to convey the nearly unfathomable reality that humans have profoundly altered the planet on a global, geological scale. To that end Morton remarks on the coincidence of the anthropocene (an event marked by a sharp increase in carbon levels that began with the Industrial Revolution) with Kant's Copernican revolution (whereby the study of the real is rendered inaccessible by the influence of the mind). Thus, just as human beings begin to alter their global environment, human inquiry begins to study the world as it exists for humans alone. For Morton, the 'Copernican revolution' is incapable of addressing the large-scale implications of human actions and thus "what is required then is a philosophy and a corresponding social ethics and politics that can think the nonhuman." For those searching for an ethics of OOO, Morton makes a strong case that it meets this social and environmental exigency. (Lisbon 2012)

Katherine Hayles, Tim Morton, Patricia Clough, Katherine Behar – Object-Oriented Feminism – [Download][Embedded]

OOO and feminism seem like a perfect match. After all, OOO is about the irreducibility of an object to a single 'reading,' the plurality of meanings that risk being erased if one single model overrides or underwrites all others. But you won't find much Kristeva or Butler here. The OOF panels tend to avoid overt references to feminist theory. That said, this panel is well worth a listen. (SLSA 2012)

Graham Harman – Art and Paradox – [Watch]

Graham breaks down aspects of this philosophy in a very well paced and lucid presentation. More specifically, the origin of OOO in relation to ‘mathematism,’ ‘scientism,’ and postmodernism is discussed here, as well as an explication of sensual and real objects/qualities, in accordance with Harman’s Quadruple Object. (The Matter of Contradiction: Ungrounding the Object 2012)

Outside the Prison

The following is a brief, informal response to Noah Wardrip-Fruin’s The Prison-House of Data from the perspective of a PhD student in Digital Media Studies.

From August 2010 to August 2011 I worked for a software company that specialized in pattern-recognition and data-mining. The company’s objective was to produce software that collected and processed data in order to make a more efficient factory. As a tech writer and web designer, I was tasked with explaining this process to a range of audiences. But no matter who I was speaking to I always had to make it clear that the data-processing software was a supplement, useful only to a highly-trained specialist. This meant that the role of the software was simply to decrease the noise, enabling the specialist, over time, to identify information-rich patterns. Once a pattern was identified, it could be encoded into the system for future, automated detection.
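As a rough illustration (and not the company's actual software), the workflow just described might be sketched like this, with hypothetical sensor readings, a hypothetical window size, and a hypothetical threshold rule standing in for the specialist's judgement:

```python
# Minimal sketch of the workflow described above: smooth a raw signal to
# decrease noise, then encode a specialist-identified pattern (here, a
# sustained rise above a threshold) as a rule for automated detection.
# All numbers and the rule itself are hypothetical.

from statistics import mean

def smooth(readings, window=5):
    """Trailing moving average to cut noise in a raw sensor series."""
    return [mean(readings[max(0, i - window + 1):i + 1])
            for i in range(len(readings))]

def detect_pattern(smoothed, threshold=75.0, run_length=3):
    """Rule encoded from the specialist's observation: flag the first index
    where the smoothed signal stays above `threshold` for `run_length` samples."""
    run = 0
    for i, value in enumerate(smoothed):
        run = run + 1 if value > threshold else 0
        if run >= run_length:
            return i
    return None

# Hypothetical machine-temperature readings with an upward drift at the end.
raw = [70, 71, 69, 72, 70, 73, 74, 76, 78, 80, 82, 85]
print("pattern detected at sample", detect_pattern(smooth(raw)))
```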

Does that description sound familiar? Perhaps not, but it’s proven eerily similar to the characterization of the (digital) humanities in the mainstream press. Take for instance a recent article in The Wall Street Journal which opens with the question: “Can physicists produce insights about language that have eluded linguists and English professors?” The article proceeds to detail ‘Culturomics,’ a fledgling field described as “the application of data-crunching to subjects typically considered part of the humanities.” This sets up a rather depressing comparison: that the humanities (minus the digital) are essentially data-mining specialists of the kind mentioned above but sans the software. Put another way, the humanities are like ENIAC in relation to modern PCs; they are performing today’s tasks with yesterday’s tools. The corollary, then, is that the digital humanities realize their full potential by transforming themselves into software-enabled pattern-recognizers.

Noah Wardrip-Fruin follows up on this characterization in his recent Inside Higher Ed piece. In this case, it’s comments from within the humanities, made by Stanley Fish in The New York Times, that drive the debate:

“Stanley Fish wrote…that digital humanities is concerned with ‘matters of statistical frequency and pattern,’ and summarized digital humanities methodology as ‘first you run the numbers, and then you see if they prompt an interpretive hypothesis.’”

The article closes with Wardrip-Fruin proclaiming that “digital humanists must begin by recognizing and developing important areas of work, already part of the field’s history, that such conceptions marginalize.” Nevertheless, he admits that a portion of the humanities has always been about data-mining but that such a role has been altered by computational advances:

“Certainly a grasp of data — the historical record, our cultural heritage — is a great strength of the humanities. But in the digital world, the storage, mining, and visualization of large amounts of data is just one small corner of the vast space of possibility and consequence opened by new computational processes — the machines made of software that operate within our phones, laptops, and cloud servers”

As an early exemplar of where digital humanities should stake its claim, Wardrip-Fruin puts forward Phil Agre’s Computation and Human Experience as a text that “serves a primary humanities mission of helping us understand the world in which we live, while also helping reveal sources of recurring patterns of difficulty for computer scientists working in AI” [italics mine]. In other words, the digital humanist can help build a better machine.

And here’s where things get problematic for me. As a PhD student in digital media studies, I feel that such a role, defined (apparently) in order to provide some degree of relevance to the digital humanities, casts me as a glorified beta-tester. This is something I’ve been struggling with since I started my PhD. And so I’ve used the handful of conferences I’ve attended to criticize texts in the digital humanities whose aim seems to be largely to expose such ‘patterns of difficulty’ present in new media projects.

Yes, it’s true, for example, that VR systems do not take into consideration embodied aspects of human experience (Hansen’s Bodies In Code). But it’s also true that in two thousand years of Western literature, no texts were as ‘mimetic’ as those of the Modernists. Erich Auerbach exhaustively details this in Mimesis. However, at no point during that historical cataloging (data-mining, if you will) did Auerbach question mimesis (the recurrent pattern), either as an ideal or as a phenomenon.

Mimesis was, in a number of ways, composed entirely in a closed system,¹ centered around a term that is never defined, perhaps because once it was examined, it would have collapsed. Instead, it's used as the perspective from which all of Western literature is judged. And yet, mimesis (the concept) is the very unquestioned signifier that Derrida and Foucault and the ensuing postmodernist texts would proceed to problematize. Mimetic in relation to what? From whose perspective—the synaesthesiac? The blind? The deaf? It becomes clear that mimesis for Auerbach is largely a visual term, for those texts that lack visual similitude to reality are quickly discredited. Oddly enough, VR, with its own mimetic ambitions, has recently been criticized for focusing too much on the visual spectrum. But we are at risk of overlooking the moral of the postmodernist tale that states that no single encoding mechanism can achieve perfect, synchronized mapping with 'reality.'

So while under the aegis of digital humanities we help debug AI programs, and isolate omissions in VR systems, and occasionally beta-test computer-science research projects, it’s worth asking: to what end? To build a better, more accurate simulation?

To reiterate, Wardrip-Fruin concludes his article by stating that "digital humanists must begin by recognizing and developing important areas of work, already part of the field's history, that such conceptions marginalize" [italics mine]. Is it not part of our history to recognize patterns so that we do not replicate them? Put differently, the question shouldn't be 'How can the digital humanities remain relevant?' but rather 'How can the digital humanities demonstrate the irrelevance of uncritical media (research)?'

In this, I can only speak for myself and for what I wish to pursue and achieve in my career. That includes the desire to be a countervailing force that follows from a firm belief that no matter how complex the encoding mechanism, the system will always fall short. And so I choose to characterize my role, as someone entering the digital humanities, as a critic and not as a product tester.

I’ll conclude by turning to the beginning of Wardrip-Fruin’s article. In those opening paragraphs he laments that the digital humanities don’t have a seat at the table set for computer-scientists and digital artists. For me, the absence is obvious; it’s because we simply don’t belong at that table. We’ll be seated at the next one over, eavesdropping on their conversations from time to time. And when we meet up at the after-party, we’ll tell them how they’re characterizing their work and how that’s similar to other patterns in other fields–perhaps pointing out that the visualization on the screen/lens cannot replace the embodied aspects of experience (Hansen), not to divert research into embodiment (a current trend) but to point out the limitations of all media.

To invoke another metaphor, our role is two-steps back from the screen, watching them watching it. The difference here is between recognizing patterns for the system (‘How can we enrich the viewer’s experience?’) and recognizing patterns of a pattern-recognition system (‘What is it about this medium that captures the attention of this audience?’ ‘What aspects of experience have been filtered out and how are the viewers compensating for such an absence?’). The former asks us to help fine-tune a medium, as though our participation in the conversation could perfect the VR system. The latter asks us to consider a question far better suited to the humanist: What kind of culture would produce such media?

Notes:

1. Auerbach, largely confined to Istanbul due to the Nazi regime in Germany, wrote much of the book from the limited selection of texts available at the Istanbul State University. In fact, Auerbach credits the existence of Mimesis to the ‘lack of a rich and specialized library.’ On this basis, he operated much like a data-processing program—a closed-system that analyzes only that data properly encoded for it to process, autopoietically consuming its output as input. Source