Art Computing Infrastructure

I’m redoing this page with the design-principle blurb here and the hardware list and state of deployment on child pages, because I just did my first update.

Remember:

  • Implementation:  the hardware is the last and least important, which is why this post is easy and first up.
  • Constraints:  next in importance is location/infrastructure – power availability, power quality, distance, up/downlink, thermals, thermal fail modes, accessibility for maintenance.
  • Purpose:  most important is use case.  And for creation that spans digital to physical, ergonomics – which we’ll consider a physical use case descriptor – was the first principle of guidance.
    Even CPU choice flows from that.  But more on that later.  Of course, software tools dictate platform choice.

For now, keep in mind – the principle is to facilitate the user (me, or someone using this as a guideline – and caveat emptor certainly applies):

  1. in creating work in diverse media,
  2. but at first, digital art/photography/composite/painting in full and accurate deep color, reflecting my choices and errors
  3. and shortly after to assist in traditional methods (archiving negatives, digital negatives/hybrid (Piezography?), light/temp control for oil/ink retouching (Hue))
  4. down to eventually CNC (Handibot?) and 3D printing (???) for everything from
    • frames
    • camera repair
    • custom/purpose built camera manufacture
    • mixed media assembly
    • more
  5. to have multiple input modes (mouse, keyboard, touch, digitizer – not really enumerated here, yet)
  6. that are responsive and give fast access (lots of RAM, SSD database, SSD + arrayed HDD tiered storage pools)
  7. to a secure, resilient, efficient archive of source material (multi-site mirrors, versioning, new filesystems, differential storage – see the sketch after this list)
  8. with the requisite, stable network resources at minimal time investment (multiple IB+OOB LAN management, “gold” VM states)
  9. and to do it as cheaply as possible (liquidated/repairable enterprise gear, residential sites, virtualized services, unutilized workstations as VM failovers, whitebox)
  10. and learning a thing or two (and I have) while not burning out (me or the equipment – thus a lot of Haswell).

I’ll let you know how that last one goes.
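
To make item 7 above a little more concrete: here is a rough, hypothetical sketch in Python – placeholder paths and all; it does not reflect the actual tooling, which leans on the filesystem and versioning features listed above – of the kind of integrity check a multi-site mirror implies: hash the primary archive, hash the mirror, and flag anything missing, extra, or silently changed.

    # Minimal, hypothetical sketch only – paths and layout are placeholders,
    # not the real deployment.  Walk a local archive and an off-site mirror,
    # hash every file, and report anything that has drifted.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Hash a file in chunks so large scans/negatives don't sit in RAM."""
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def manifest(root: Path) -> dict:
        """Map each file's path (relative to root) to its SHA-256 digest."""
        return {
            str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*"))
            if p.is_file()
        }

    def compare(primary: Path, mirror: Path) -> list:
        """List every file that is missing, extra, or changed on the mirror."""
        a, b = manifest(primary), manifest(mirror)
        problems = [f"missing on mirror: {rel}" for rel in a if rel not in b]
        problems += [f"checksum mismatch: {rel}" for rel in a if rel in b and a[rel] != b[rel]]
        problems += [f"extra on mirror: {rel}" for rel in b if rel not in a]
        return problems

    if __name__ == "__main__":
        # Placeholder paths for the primary pool and a mounted mirror copy.
        for issue in compare(Path("/pool/archive"), Path("/mnt/mirror/archive")):
            print(issue)

A mirror you never verify is just a second place for bit rot to hide, which is part of why the archive goals above call out versioning and resilience rather than raw copies.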

So, this is just a copypasta of a text file; for now it doesn’t use any nice minimal formatting, nor links to posts/explanatory pages.
Still, for clarity:

  • normal text means deployed and configured, or nearly so – remember, this is the infrastructure for tasks; some of those projects are software implementations – and just maybe, development. (“work”)
  • italics means assembled and configured pre-deployment, currently at the primary site. (“needs work”)
  • italics and underlined means yet to be assembled, acquired, or determined. (“to be worked on”)

This document will be updated as things get shuffled.  In a deployed environment, a standard platform from chassis on up would be preferred, to reduce complexity and maximize the value of testing boxes as backup hardware.  But there is a lab/exploration aspect to this in terms of learning, and in some ways this is uncharted territory – no one takes a project management/enterprise toolkit approach to small LANs except IT professionals who are working on their skill set, tinkering, or whose terabytes of data are oriented around media consumption, not production.  That changes three things:

  1. Their data is replaceable; this data, particularly in its source form, is not
  2. Their data is public and its value does not change upon distribution; I’ve had to build multi-site mirrored storage – a private cloud
  3. Their hardware – much like the generic devices supported by ESXi – is fungible as long as it provides cycles/storage/traffic; I rely on specific models of niche tools

This isn’t to knock them – I’ve benefitted greatly from sites like servethehome and smallnetbuilder even if I mostly lurk.  But while I share some background and tinkering/lab intent with them, that is distinctly secondary to my purposes.  Years ago I had everything from the Old/New World hybrid G3 to an AS/400 in my small apartment; I had something like 18 platforms on six processor architectures.  I wish I still could.  I loved it.

But that had to go: my primary purpose dictates what has to be, and I have to be careful not to let the tinker lab cloud (pun mildly intended) my purpose.
If nothing else, it would be bad IT practice.

That said, I’ve been approached a few times in the last half year about doing something similar for others, despite not mentioning this anywhere but my personal Facebook account.  Learning and swapping between all these (board * chassis * disk) setups could provide four or five common blocks to customize and deploy as a consultant/integrator.  I even had a chat with a friend who does (very turnkey) telco/PBX/LAN integration at the SMB level, who offered to refer prospective clients – architects, studios and the like – to me, because turnkey is all well and good for POS or office computing but doesn’t help those (of us) with $3,500 Cintiqs.

But that’s not my primary concern – it can’t be: though most of my last half year has been spent learning/experimenting to build this setup, it was absolutely necessary for my art, to which I need to return.  This deployment was an element of that, not an end unto itself.

But it would be nice for this work to have more use than “what needed to be done,” especially because of the cost in time and art not done.

Which, in part, is why I’m putting this page up.

Whew.  This all just popped out of me.  Anyway.  Here we go:

June 2014

April 2014
