Friday, 10 May 2019

Liang–Barsky Line Clipping And Why You Need Not Bother

I spent numerous hours working out how to efficiently clip strands/polylines in VEX for a project with dense strand meshes. I thought I'd been really clever implementing the Liang-Barsky algorithm on the NDC cube, and then I stumbled into the problem of how to interpolate all existing attributes at the intersection points. When I asked for help on various forums, Matt Estela pointed out that the same effect could be achieved with four Clip SOPs in NDC space, with the added bonus of 'free' attribute interpolation. I can't bear the thought of all that time going to waste, so here is a VEX implementation of Liang-Barsky - maybe someone can find some kind of use for it!
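For anyone curious about the algorithm itself, the core is compact. Here's a rough Python sketch of Liang-Barsky clipping against an axis-aligned cube such as the NDC cube (the hip file does this in VEX; the function and parameter names here are my own):

```python
def clip_segment(p0, p1, lo=-1.0, hi=1.0):
    """Liang-Barsky: clip the segment p0->p1 against the axis-aligned
    cube [lo, hi]^3.  Returns the (t0, t1) parameters of the clipped
    span along the segment, or None if it lies entirely outside."""
    t0, t1 = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        # each axis contributes two slab tests: entering (lo) and leaving (hi)
        for p, q in ((-d, p0[axis] - lo), (d, hi - p0[axis])):
            if p == 0.0:
                if q < 0.0:            # parallel to the slab and outside it
                    return None
            else:
                t = q / p
                if p < 0.0:            # entering the slab
                    if t > t1:
                        return None
                    if t > t0:
                        t0 = t
                else:                  # leaving the slab
                    if t < t0:
                        return None
                    if t < t1:
                        t1 = t
    return t0, t1
```

The returned (t0, t1) parameters are exactly what you'd need as interpolants for attributes at the clipped endpoints - the part the Clip SOP approach gives you for free.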

File is here.

Monday, 6 May 2019

Cantor Pairing For Impact Data And Generating Dynamic Bullet Constraints

There have been some fantastic resources for demonstrating how to create dynamic constraints in Bullet - most notably Rich Lord's stuff and, as usual, examples on Matt Estela's site.

The process, however, of establishing which collisions took place between which objects can be quite tricky and intensive - particularly as the Impact Data which records this stuff contains many duplicates of impacts between the same objects across multiple substeps. Usually you'd use a few For Loops to rationalise and structure this data.

The scene below uses a simple pairing formula called Cantor Pairing to essentially encode a collision between a pair of objects into a single number - it's then very easy to see if that collision has happened before and/or to remove duplicates of that collision. It also seems to be quite fast.
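The formula itself is tiny. Here's a Python sketch (the scene does this in VEX; sorting the pair first makes the key order-independent, since a collision between A and B is the same event as one between B and A):

```python
def cantor_pair(a, b):
    """Cantor pairing: encode two non-negative integers into one
    unique integer.  pi(a, b) = (a + b)(a + b + 1) / 2 + b."""
    return (a + b) * (a + b + 1) // 2 + b

def collision_key(id_a, id_b):
    """Order-independent key for a collision between two objects:
    sort the pair first so (3, 7) and (7, 3) encode identically."""
    lo, hi = sorted((id_a, id_b))
    return cantor_pair(lo, hi)
```

Deduplicating the Impact Data then reduces to membership tests against a set of keys - no nested loops over object pairs.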

The sample scene does that encoding and then establishes constraints dynamically at impact points. I think (hope) it's as simple as it can get but, as always with Houdini, there are probably better ways.


Sunday, 5 May 2019

Weighted Average of Quaternions Without Flipping?

If you look around the Internet for the best mechanism to create an average quaternion, and particularly a weighted average of multiple quaternions, you will almost always end up at this PDF from NASA.

A simpler way of averaging quaternions is possibly just to sequentially lerp or slerp between pairs of quaternions and adjust the weights at each step but this may or may not be accurate depending on the order in which you select the pairs. There's some debate on the best way to do this.
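As a sketch of that sequential approach (Python/numpy for illustration, not the code in the scene file), with the usual dot-product sign flip so each blend stays on the short arc:

```python
import numpy as np

def slerp(q0, q1, t):
    """Shortest-arc slerp between unit quaternions (as 4-vectors).
    Negates q1 when the dot product is negative, so the blend never
    takes the long way round the double cover."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    d = np.dot(q0, q1)
    if d < 0.0:
        q1, d = -q1, -d
    if d > 0.9995:                 # nearly parallel: lerp and renormalise
        out = q0 + t * (q1 - q0)
        return out / np.linalg.norm(out)
    theta = np.arccos(np.clip(d, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def sequential_average(quats, weights):
    """Running weighted average: fold each quaternion in with a slerp
    whose interpolant is its share of the accumulated weight.  As
    noted above, the result depends slightly on input order."""
    avg, total = np.asarray(quats[0], float), float(weights[0])
    for q, w in zip(quats[1:], weights[1:]):
        total += w
        avg = slerp(avg, q, w / total)
    return avg
```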

An additional problem with simply lerping quaternions is that they have a “double-cover” property, where there are two different quaternions (negatives of each other) that represent the same 3D rotation.  Normal lerp between them is dependent on which of those 'doubles' the quaternion actually is - you can go the long way round or the short way round. Looking at the orientation in the viewport you have no idea which one is being used. If, say, you have two similar quaternions but one is using something close to the negative of the other you can get very fast rotations around 180 degrees which look like flips. Slerp solves this problem but only between pairs (as well as providing a constant velocity as the interpolant increases which normal lerp doesn't).

When you use primuv to interpolate quaternions across a polygon in Houdini, that interpolation is a simple lerp between the numbers in the quaternions - you cannot guarantee that the blend will take the shortest arc (in which case you'll get strange intermediate rotations).

I found some Python code for implementing the NASA paper and made a scene to compare and contrast the different types of interpolation. The NASA results do look way better than naive primuv-style lerp. But, since it's python/numpy it's not that fast.
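The heart of the NASA (Markley et al.) method is just an eigendecomposition. A minimal numpy sketch (not the exact Python code in the scene):

```python
import numpy as np

def weighted_quaternion_average(quats, weights):
    """Markley et al. (the NASA paper): the weighted average is the
    eigenvector of M = sum_i w_i * q_i q_i^T belonging to the largest
    eigenvalue.  The outer product is sign-insensitive, so double
    cover is a non-issue here."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, float)
        M += w * np.outer(q, q)
    vals, vecs = np.linalg.eigh(M)       # M is symmetric, so eigh is fine
    return vecs[:, np.argmax(vals)]      # column of the largest eigenvalue
```

Because q and -q produce the same outer product, the flipping problem disappears entirely; note the sign of the returned quaternion is arbitrary.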

File is here.

Saturday, 25 November 2017

Odds And Ends 02

This is a vex-driven cube fold/unfold using a hierarchy of matrices in a parent/child relationship. Crazily useless!

hipnc file

Odds And Ends 01

I thought I would collate together some of the scenes I've been experimenting on and posting on forums - more for my own reference than anything else!

The first is based on an ICE setup for grooming feathers that Psyop demoed a few years ago. It establishes a local reference frame on each point and then interpolates between multiple guide nulls to orient 'feathers' relative to that local orientation. I was hoping to use the new array slerp vex function introduced in 16.0/16.5 but I couldn't get predictable results from it so reverted back to a rough and ready 'piecewise' quaternion slerp to get a weighted average. I am absolutely positive there are much simpler ways in Houdini to achieve the same effect!

hipnc file

Saturday, 10 June 2017

Smoother Reference Frames on Curves

I used this paper on Parallel Transport Reference Frames and some great advice from Andy Nicholas (who did a better version in a tenth of the time!) to make a tool to construct smooth reference frames on curves with twist and rotation. Hipfile (NC) is here.
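The basic idea of parallel transport is easy to sketch (Python/numpy for illustration; the tool itself is built in Houdini, and this simplified function ignores the twist and rotation controls):

```python
import numpy as np

def parallel_transport_frames(points, up=(0.0, 1.0, 0.0)):
    """Parallel transport along a polyline: build one reference frame
    at the start, then rotate it at each point by the minimal rotation
    taking the previous tangent to the next.  This avoids the sudden
    frame flips you get from a per-point up-vector cross product."""
    pts = np.asarray(points, float)
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # initial normal: the up vector made perpendicular to the first tangent
    n = np.asarray(up, float)
    n -= np.dot(n, tangents[0]) * tangents[0]
    n /= np.linalg.norm(n)
    frames = [(tangents[0], n, np.cross(tangents[0], n))]
    for i in range(1, len(pts)):
        t_prev, t_cur = tangents[i - 1], tangents[i]
        axis = np.cross(t_prev, t_cur)
        s = np.linalg.norm(axis)
        if s > 1e-8:
            axis /= s
            ang = np.arccos(np.clip(np.dot(t_prev, t_cur), -1.0, 1.0))
            # Rodrigues rotation of the normal about 'axis' by 'ang'
            n = (n * np.cos(ang) + np.cross(axis, n) * np.sin(ang)
                 + axis * np.dot(axis, n) * (1.0 - np.cos(ang)))
        frames.append((t_cur, n, np.cross(t_cur, n)))
    return frames
```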

Monday, 20 February 2017

Houdini Branching Structures

First attempt to do some branching in Houdini. Nothing new really. There's a silent walkthrough of the main controls on Vimeo here. It's based on some of the concepts in Fabricio Chamon's brilliant Strand Tree ICE compound. It uses a simple iterative paradigm: at each iteration n branches are generated and some kind of multiplier is worked into the length, width, angle etc. This multiplier can be attenuated by a 'reduction/increase' factor per iteration or by using a ramp. There are parameters to control the growth and colour with normalised attributes like distance, u etc. and these are all left exposed. I hope the video makes it clear. The 'to do' list is massive - proper topologically correct junctions (without hacking it with Fuse or Polygons to VDB) would be great! The full .hip file is here. Any tips most welcome - interface or performance enhancements!
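As a toy illustration of that iterative paradigm (Python, 2D for brevity; the parameter names are invented for the sketch and don't match the hip's interface):

```python
import random

def grow_branches(iterations=3, per_branch=2, length=1.0,
                  length_mult=0.6, spread=0.5, seed=0):
    """Toy branching sketch: start from one trunk tip, and at each
    iteration spawn 'per_branch' segments from every live tip,
    scaling segment length by a per-iteration multiplier and fanning
    the directions by 'spread'."""
    rng = random.Random(seed)
    segments = []                        # (x0, y0, x1, y1) line segments
    tips = [((0.0, 0.0), (0.0, 1.0))]   # (position, direction) pairs
    for it in range(iterations):
        step = length * (length_mult ** it)   # attenuated per iteration
        new_tips = []
        for (x, y), (dx, dy) in tips:
            for _ in range(per_branch):
                # jitter the parent direction, then renormalise
                ndx = dx + rng.uniform(-spread, spread)
                ndy = abs(dy + rng.uniform(-spread, spread))
                norm = (ndx * ndx + ndy * ndy) ** 0.5 or 1.0
                ndx, ndy = ndx / norm, ndy / norm
                end = (x + ndx * step, y + ndy * step)
                segments.append((x, y, *end))
                new_tips.append((end, (ndx, ndy)))
        tips = new_tips
    return segments
```

With the defaults this yields 2 + 4 + 8 = 14 segments; a ramp-driven multiplier would just replace the `length_mult ** it` term with a lookup.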

Monday, 30 December 2013

Weighted Arrays

Often you need to 'weight' the items of an array so that you can achieve a certain ratio, e.g. for a particle cloud 25% red, 25% blue and 50% green. The archive below contains a couple of sample scenes that show one specific technique (borrowed from Greg Turk's Graphics Gems algorithm for weighting triangles by area) for weighting an array, plus a couple of compounds kindly made by Dan Yargici making it easy to set up the initial weighted array and then pull data from it. The weighting doesn't have to be normalised - any ratios will work. Download the compounds and scenes here.
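Turk's technique boils down to a cumulative sum plus a binary search. A Python sketch (the archive's compounds are ICE, so the names here are mine):

```python
import bisect
import random

def build_cdf(weights):
    """Cumulative sums of the (not necessarily normalised) weights."""
    cdf, total = [], 0.0
    for w in weights:
        total += w
        cdf.append(total)
    return cdf

def pick_index(cdf, r=None):
    """Draw an index with probability proportional to its weight:
    a uniform sample into the cumulative range, then a binary search.
    The same idea Turk uses to pick triangles by area."""
    if r is None:
        r = random.uniform(0.0, cdf[-1])
    return bisect.bisect_left(cdf, r)
```

For the 25/25/50 example you'd build the CDF from [1, 1, 2] (or [25, 25, 50] - the ratios needn't be normalised) and call pick_index once per particle.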

Clone With Index

A few months back Graham Fuller posted a great tip about how to retrieve the clones of a given point on the xsi mailing list. In this post Gustavo asks how to enumerate the individual clones so you can handle them separately. This scene shows three methods - a couple of them use variants of Graham's method whilst one uses a simple loop. They're useful techniques for creating hierarchies of particles where numerous clones can act as 'children' of a master particle.

Thursday, 18 July 2013

Faster Polygon Islands

Eric Mootz's emTools has a suite of compounds to manipulate polygon islands with a particle cloud. They require a pre-calculated index array mapping vertices to their island index. Doing this directly in ICE is relatively slow - quite a bit of work has been done to get these ICE compounds as fast as possible (see Guillaume Laforge's original blog post and this thread) - but a C++ ICE node can usually run orders of magnitude faster. The node provided here has proved to be as much as 10-15x faster than the ICE implementations (depending on the scene). I'm sure there's scope for more optimal C++ coding to make it faster still.
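The underlying computation is a connected-components pass over the mesh. A Python sketch of one standard way to do it with union-find (illustrative only - the addon's C++ differs in detail):

```python
def island_indices(num_points, polygons):
    """Union-find sketch of the island computation: merge the points
    of every polygon into one set, then compact the set roots into
    contiguous island indices (0, 1, 2, ...) in order of first
    appearance.  'polygons' is a list of point-index lists."""
    parent = list(range(num_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving keeps trees flat
            i = parent[i]
        return i

    for poly in polygons:
        first = find(poly[0])
        for p in poly[1:]:
            parent[find(p)] = first         # union every corner with the first

    remap, islands = {}, []
    for i in range(num_points):
        root = find(i)
        islands.append(remap.setdefault(root, len(remap)))
    return islands
```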

You can use the node to feed data into Eric's vertex island tools (sample scene in the .rar file, remember to install emTools first) or as a standalone utility node if you're manipulating islands in your own way. It takes geometry and point positions as inputs and can output the index array of points and their island index, a per point island index and an array of island centres.  It's been compiled against 2013 SP1 64bit. Source code is included with the addon.

jj_Island_Indexer 1.0.0

Tuesday, 2 July 2013

Dart Throw Multiple Size

Dart Throw has been updated to support an input array of multiple sizes (as well as continuing to support a single input size). You can now instance a group of differently-sized objects onto your particles by creating an array of the sizes of the objects in the group. Numerous other tweaks and upgrades have been added:
  • Input an array containing multiple sizes and darts will match those sizes
  • Weight map based size adjustment now supports either absolute or scaled size
  • Randomising now supports negative variance and either absolute or scaled size
  • A new 'min size' parameter lets you control/limit the minimum size
  • Size adjustments via weights and/or randomisations can be applied to the array of input sizes
  • Source code is now included in the src directory of the addon
  • Illustrative sample scenes are included in the archive
The new version of Dart Throw (2013 SP1+, 64bit)  can be downloaded here.

ICE Node Inputs

Usage Summary

Iterations

This controls how many attempts are made to position a dart on the geometry. The more attempts you make, the denser the packing becomes, up to the point where it becomes virtually impossible for a dart to land on an empty space with sufficient room for its size.
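As a toy illustration of the loop this parameter drives (Python, 2D, single radius; the actual plugin throws darts onto arbitrary geometry and supports per-dart sizes):

```python
import random

def dart_throw(radius, iterations, seed=0):
    """Toy 2D dart throwing in the unit square: propose a random
    position, reject it if it overlaps an already-accepted dart, and
    track how many consecutive tries each successful dart took."""
    rng = random.Random(seed)
    darts, max_tries, tries = [], 0, 0
    for _ in range(iterations):
        x, y = rng.random(), rng.random()
        tries += 1
        # accept only if no existing dart is closer than 2 * radius
        if all((x - dx) ** 2 + (y - dy) ** 2 >= (2 * radius) ** 2
               for dx, dy in darts):
            darts.append((x, y))
            max_tries = max(max_tries, tries)   # the figure reported in the log
            tries = 0
    return darts, max_tries
```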

If you look in the history log, Dart Throw reports something like the following: "Max iterations is: 41 at: 49990" (assuming an iterations setting of 50,000). This tells you that the maximum number of attempts it took to get a successful dart inserted was 41 and that happened at iteration 49990.

In a scenario where you get something like "Max iterations is: 101000 at: 899012" (assuming an iterations setting of 1M) you can see that it's taking over 100,000 attempts just to get one dart positioned. It's probably not worth waiting for that 1 extra dart, so you could reduce your iterations to 899011. In the Misc section you can use the 'Iteration Abort Threshold' parameter to set how many unsuccessful dart throwing attempts the node should make before abandoning automatically. See that section below for detailed instructions.

Size

You can input either an array or a single value here and Dart Throw will generate 'darts' that only use those size(s). The size array does not need to be in any particular order, which means you can put your objects into your instance group any way you see fit.

Size Adjustment

Two different instance types with sizes adjusted by weightmap.

If you plug a weight map into the size map port you can control the size of your particles using the values in the map as an interpolant. What that means is that the weight map value between 0-1 will interpolate between the Basic::Size parameter you entered above and the Size Adjustment::Adjusted Size parameter you enter in this section.

The Adjusted Size parameter can be set as either a scaling factor for your size or as an absolute value. Depending on the weight map value your particle size will then interpolate between Size and Adjusted Size.

If you plug a weightmap into the Erase Map port then darts will only land on areas with a weightmap value on or above the threshold. All other darts will be deleted.


A single instance whose size is randomised using a variance. Version 3 supports negative variance.

The randomise option enables you to add a variance to your sizes either as an absolute value or a scaled value. Randomise assumes a variance around 0 so you'll get both negative and positive variance. This is where setting 'min size' (see below) can help ensure your smallest particles are not pushed into negative territory if you use the scaled option.


Iteration Abort Threshold
This parameter lets you specify what fraction of the total iterations have to occur without a successful dart before the node automatically stops iterating. For example if you have set 1,000,000 iterations and set this parameter to 0.1 it means that if there are 100,000 iterations without a successful dart the node will stop iterating.

Min Size
Now that the randomise feature (above) supports both absolute and scaled variances it's possible, using scaled, to scale your particles by a negative amount i.e. if your variance is set to, say, 0.5 then you will get scaling values between -0.5 and +0.5. Min Size simply sets a minimum for your particles. (If you use absolute in this scenario it means you can subtract negative values from your particle size which won't necessarily push the size into negative territory).


ICE Node Outputs

Position Array
This is the array of successful dart positions.

Size Array/Size Per Point
This is the array/per point output of sizes associated with those output positions. Be aware that the size output needs to be treated differently depending on whether you're using intrinsic ice objects like sphere, cube, cone etc. or geometry instances. In the case of intrinsic ice objects you should use this output directly to control the size. In the case of instances you need to divide the output size by the input size to work out a scaling factor for your instances. See the sample scenes for examples.

Size Index Array/Size Index Per Point
If you're using an input array of sizes this is either an array or per point output specifying the index into the input size array for each position. For example, your input size array will match the sequence of sizes in your instance group. When Dart Throw allocates a size to a point you will need to know for each point which size it has selected. This output tells you the index of that size so that you can pick the correct instance from the instance group.

Friday, 21 June 2013

Texture UV To Location

I was motivated by this thread on the mailing list (and Gustavo's excellent Motion Tools) to work on a C++ ice node to provide a quick way of finding positions from an input texture uv array (the factory-installed UV to Location node doesn't work on polygonal geometry). The method doesn't require triangulated geometry.

Since we don't have access to pointlocators in the ICE SDK,  the node doesn't actually get locations directly but it does generate a position on the polygonal surface from an input UV. The custom ice node is built into a compound that then takes the output positions and generates locations using Get Closest Location. You have the option of using the position directly or using the location port.
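The barycentric step is sketched below in Python (illustrative only - the addon is C++, and real meshes are triangulated internally first; here 'triangles' pairs each triangle's corner UVs with its corner positions):

```python
def uv_to_position(uv, triangles):
    """For each triangle, solve for the barycentric weights of the
    query UV in texture space; if all three weights are non-negative
    the UV falls inside that triangle, and the same weights blend the
    triangle's 3D corner positions.  'triangles' is a list of
    ((uv0, uv1, uv2), (p0, p1, p2)) tuples."""
    u, v = uv
    for (a, b, c), (pa, pb, pc) in triangles:
        det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
        if abs(det) < 1e-12:
            continue                       # degenerate triangle in UV space
        w0 = ((b[1] - c[1]) * (u - c[0]) + (c[0] - b[0]) * (v - c[1])) / det
        w1 = ((c[1] - a[1]) * (u - c[0]) + (a[0] - c[0]) * (v - c[1])) / det
        w2 = 1.0 - w0 - w1
        if min(w0, w1, w2) >= 0.0:
            return tuple(w0 * pa[i] + w1 * pb[i] + w2 * pc[i]
                         for i in range(3))
    return None                            # no geometry under this UV
```

Returning a sentinel rather than crashing when no triangle contains the UV is exactly the 'no geometry under UV' case.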

The addon is below for 2013 SP1 (64bit only) and contains C++ source code with comments on the barycentric and triangulation methods as well as notes on some of the problems/choices related to building a custom ice node. A sample scene is also included. If you find a circumstance where it doesn't work correctly I'd be very interested in the scene file.

Updated 28 June 2013:
Crash when no geometry under UV fixed.
Texture UV To Location v1.1

Wednesday, 9 January 2013

Place Specular Highlights

Updated 22nd January. The fifth beta (v0.9b) of a  tool using the Custom Tool SDK to interactively place specular highlights on objects is now available. The (updated for v0.9b) movie here shows what the tool can do.

The tool allows the  user to select any pre-existing light, lightroot or object in the scene and that object will be manipulated. If no light or object is selected a spotlight will be created. This means existing VRay and Arnold lights are now supported.

The main features of v0.9b are now:
  • Place specular highlights from lights directly where you position cursor over an object.
  • Place any object, not just lights, along the reflection vector.
  • Place multiple objects simultaneously.
  • Original distance(s) to incident point is/are retained.
  • Distance to point can be manipulated with the Shift key.
  • The cursor can be placed back on the 'specular' point by holding down Shift + CTRL.
v0.9b also introduced some significant performance enhancements to the underlying pick routines.
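The geometry behind the tool is the standard reflection formula: for a highlight to appear at the picked point when viewed from the camera, the light must sit along the reflection of the view vector about the surface normal. A hedged numpy sketch (function and parameter names are mine, not the SDK's):

```python
import numpy as np

def specular_light_position(eye, point, normal, distance):
    """Place a light so its specular highlight lands at 'point' when
    viewed from 'eye': mirror the view direction about the surface
    normal, then walk 'distance' along that reflection vector."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    view = np.asarray(point, float) - np.asarray(eye, float)
    view /= np.linalg.norm(view)
    refl = view - 2.0 * np.dot(view, n) * n   # reflect view about the normal
    return np.asarray(point, float) + distance * refl
```

Retaining the original distance to the incident point, as the tool does, just means measuring `distance` before the move.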

Friday, 21 December 2012

Convex Hull Using CGAL Library

Although Guillaume Laforge has posted the definitive Convex Hull node for ICE, I thought it would be an interesting exercise to try and hook into the CGAL geometry library and see if it was possible to use their Convex Hull (Qhull) algorithm in an ICE node. It turned out to be relatively straightforward. The Addon and C++ source is here (2013SP1, 64bit only. Only tested on a couple of machines, let me know if there are any floating dependencies I may have missed).

Sunday, 18 November 2012

More Even Spacing

Softimage 2012 gave us the ability to create ICE Attributes directly via scripting and populate them with data. Guillaume Laforge used this ability extensively in CrowdFX and I've recently been using it as a mechanism to easily store large datasets in place of Blobs (e.g. for storing animated curve data from Flame GMasks).

As a simple example, I created a script which takes an input curve and creates an ICE Attribute on that curve containing evenly-spaced point positions. The relationship is 'live' so you can manipulate the curve and alter the number of evenly spaced 'ICE' points. You could then go on to feed that ICE data into another ICE tree.

In the same archive I've also included a script to generate a 'real' curve with a live link to the original curve - the new curve has evenly spaced points* and can be any degree you choose and/or constrained to the original. (*This even spacing becomes more accurate the higher your resolution).
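The even-spacing part is a standard arc-length resample. A numpy sketch (the script itself drives Softimage's object model and ICE attributes, which aren't reproduced here):

```python
import numpy as np

def resample_evenly(points, count):
    """Arc-length resampling of a polyline: accumulate segment
    lengths, then place 'count' points at equal arc-length steps by
    linear interpolation.  As noted above, accuracy improves with
    the resolution of the input sampling."""
    pts = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative length
    targets = np.linspace(0.0, arc[-1], count)
    out = []
    for t in targets:
        i = np.searchsorted(arc, t, side='right') - 1   # containing segment
        i = min(i, len(seg) - 1)
        f = 0.0 if seg[i] == 0.0 else (t - arc[i]) / seg[i]
        out.append(pts[i] + f * (pts[i + 1] - pts[i]))
    return np.array(out)
```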

The archive is here.

Using Generate Sample Set to Avoid Repeat Loops

Oleg Bliznuk, the author of Exocortex's Implosia FX, posted a tip for avoiding repeat loops a few months back using Generate Sample Set. It's a great tip and one which can generate good time savings over repeat loops. I wanted to test just how much of a saving the tip could provide by calculating a cumulative sum array i.e. given an array of integers, produce another array which gave you the cumulative sum of all elements in the array at any given point.

The conventional way to do that would be to use a simple repeat loop, but as is always the case with ICE, repeat loops are not necessarily the best way to achieve your desired results as ICE's multithreading isn't optimised in that scenario.

Oleg's brilliantly lateral idea was to generate a sample for each array member and create an array on that sample the same size as the element's index. You can then populate that array with all the members of your original array up to that point and perform tasks on that segment of the original array.
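The two shapes are easy to contrast in Python (illustrative only - the point in ICE is that the second form has no serial dependency between elements, so it multithreads):

```python
def cumulative_sum_loop(values):
    """The repeat-loop shape: one running total, O(n) time and memory,
    but inherently serial."""
    out, total = [], 0
    for v in values:
        total += v
        out.append(total)
    return out

def cumulative_sum_per_sample(values):
    """The shape of the Generate Sample Set trick: each element gets
    its own copy of the array up to its index and sums it
    independently, so every element can be computed in parallel.
    The price is O(n^2) storage for all those sub-arrays - which is
    exactly where the RAM figures below come from."""
    return [sum(values[: i + 1]) for i in range(len(values))]
```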

I decided to set up a test scene to compare the performance of a conventional repeat loop and Oleg's  method. In that test scene, Oleg's method was twice as fast as a repeat loop which is a significant gain.

But, there is a downside - you pay in RAM usage, sometimes to the point where you might end up paging memory. In my sample scene, the repeat loop method used 200 MB of RAM for an array of size 50k whereas in the Generate Sample Set scene RAM usage shot up to nearly 10 GB! The stepping in RAM usage relative to array size (on my machine) went: 5,000: 373 MB; 10,000: 849 MB; 20,000: 2.6 GB; 30,000: 5.5 GB and so on. It looks like there's a step in RAM usage between 10,000 and 20,000, although I'm guessing this might be machine dependent.

It's clear that if you're going to use this method in a compound you need to be careful about the maximum array size you're going to allow and possibly switch over to a conventional repeat loop over a certain threshold.

If you'd like to play with the scene or do your own timings to verify these results it can be downloaded here along with the Cumulative Array Sum compound (2013 SP1).

Monday, 30 January 2012

ICE Modeled Camera Grid

I was way too slow to post this compound to the XSI Mailing list in response to a thread on the creation of a camera plane but here it is anyway! It dynamically creates an ICE modeled grid and places it inside the camera frustum at a specified distance from the camera. At the same time it creates an ICE-based texture projection. Sample scene and compound is here.

Filtering Arrays

Stephen Blair has been posting some great articles on the eX-SI Support Blog about how to create array patterns, i.e. ordered sequences like (0,1,2,3,0,1,2,3) or (0,0,0,1,1,1,2,2,2). This type of sequence can be useful for creating the points of a grid and also for avoiding the use of repeat loops in certain circumstances.

On the XSI mailing list a few weeks back Dan Yargici posted a question about how to reconfigure a simple non-regular pattern like (1,1,5,5,5,5,8,9,9,9,10,10) into (1,1,2,2,2,2,3,4,4,4,5,5). In the resultant thread Martin Chatterjee came back with a brilliant solution without using repeats. Martin's solution touched on some of the inherent functionality in ICE arrays that's worth expanding upon.

I'm going to try and illustrate some of these with a sample scene that contains an ICE tree that shows several different methods for firstly creating a pattern array and then using that pattern to manipulate an array. If that all sounds abstract it's based on Dan's problem above but with the added wrinkle that the initial pattern is not numerically ascending i.e. it looks something like this: (8,8,2,2,2,14,3,3,1,16,11,11).

Sunday, 25 July 2010

Dart Throwing with Weight Maps

3rd July 2013: Please note: the latest, updated version of Dart Throw is here.

I've just finished a new version of Dart Throwing with weight map control for spacing (spacing maps) and point deletion (erase maps). It uses its own internal scheme for computing the barycentric weights from weight map values at vertices. Spacing maps allow you to modulate the density of the points using the weight map as an interpolant between the spacing radius and max spacing radius. Erase maps have a threshold parameter which lets you control at what point between 0 and 1 on the weight map the point should be deleted.

There's also a new 'Iteration Abort' parameter which specifies the number of tries (as a normalised percentage of the total iterations) the plugin should make before it aborts. This is for scenarios where you set a huge number of iterations but the number of successful darts becomes very low, i.e. if you set 1,000,000 iterations and only 1 extra dart is added in the last 500,000 tries, then you can see that aborting after 500,000 might be handy. The maximum number of tries taken for any single dart is logged to the message window as a reference.

Changes in the way multiphase/element generator plugins work in the SDK mean that this version is only available for the very latest version of Autodesk Softimage - 2011 SP1. The Addon has been compiled for both 32 and 64bit Windows.

Download Dart Throw v002 Addon

A Vimeo clip is here.

Saturday, 17 July 2010

Context Switching Using Set Nodes and Filter

In the roll object compound below I use the Set nodes to move between point context and object context. Combined with filter, it's a powerful way to access an individual component in a specific context and make it available to all other components in that context.

On the list here (and on several occasions before), Ciaran Moloney has suggested a technique for using Set nodes, Filter and the Repeat node for gathering data in one context and turning it into an object context array. It's a great technique and one which deserves a closer look.

Whenever you use a specific context node (e.g. point, node, polygon) in the branch of an ICE tree, its context takes precedence over any object context nodes in the same branch. As soon as you try and set any data in that branch it will always be in the more granular context. Using the Get Set xx nodes, however, acts as a switch on the context of the branch and turns it back into object context. Put simply, if you start off in 'polygon position' context you can finish the branch with a Get xx in Set node, which provides you with a single piece of object context data, e.g. maximum in set.

Ciaran's trick involves using this switching ability to continually iterate over each point, polygon or node and isolate a single item each time - pushing that item onto an object context array.

In the tree above I construct an object context array of all the node positions. You can see that the tree starts in node context but then each node has a repeat loop iterating over itself, matching its own element index with the list of all node element indexes. When it finds a match the output is a single node still in node context. Pumping this into Get Maximum in Set - a filtered set of one - simply switches the output to object context and the node position gets pushed onto an object context array.

The downside is obviously iteration time as each node has to iterate NbNodes times over itself and with large numbers of nodes, polygons etc. this could be slow. However, it seems like the only surefire way to construct robust object context arrays of node positions, polygon positions etc.