Update v2.4.0 kh22 rf #2

Open · wants to merge 345 commits into master
Conversation

uZer (Member) commented Aug 23, 2023

No description provided.

avoid taking a lock for time lookup, just use the time acquired below
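A minimal sketch of that pattern, with a stand-in structure and hypothetical field names rather than the real icecast-kh definitions:

```c
#include <stdint.h>
#include <time.h>

/* minimal stand-in for the real client structure; fields are hypothetical */
typedef struct client { uint64_t schedule_ms; struct client *next; } client_t;

/* the timestamp is taken once and reused for the whole pass, instead of
 * calling a lock-protected time helper for each lookup */
static void schedule_pass (client_t *list)
{
    time_t now = time (NULL);
    for (client_t *c = list; c; c = c->next)
        c->schedule_ms = (uint64_t) now * 1000;
}
```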
fix mostly compiler warnings; the log level check was incorrect, although
not an error in the valid case
a cleanup was not right with the global http-headers. The previous routine
referred to the member, but the replacement missed out the reference.
any client that already has a respcode does not take any conditional
action based on the response from the auth server.
use the aux_data member instead of the refbuf to pass the control structure
for intro content building in listener_add. Functionally the same, it just
avoids the overlay aspect and uses something more temporary.
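A hedged sketch of the hand-off; the types and field names here are illustrative stand-ins, not the actual icecast-kh definitions:

```c
typedef struct intro_ctl { int format; void *mount; } intro_ctl_t;

/* minimal stand-in: aux_data is a spare per-client slot, while refbuf
 * normally carries stream data, so overlaying a control structure on
 * refbuf worked but was easy to misread */
typedef struct client { void *aux_data; void *refbuf; } client_t;

static void listener_add_sketch (client_t *client, intro_ctl_t *ctl)
{
    client->aux_data = ctl;   /* temporary hand-off, cleared once consumed */
}
```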

Actually set a default prefix for processing client headers beforehand
allow for an empty name to be used for appending on the end of the items.
Needs extra work to be complete though
rare setup but can occur. make auth create the lock and set the count as
a minimum in the initialising part, as auth_release expects it
To have a cleaner exit path for requests going via curl handles, use the
callback routines (used for progress reports) to check the status of the
server's running state. If it is shutting down then do not immediately abort
those transfers, but do so after a number of seconds have elapsed.

While an immediate abort would stop in-flight requests more quickly,
allowing a certain duration to elapse keeps allowances for tracking state
externally.
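A minimal sketch of hanging such a check off libcurl's transfer callback; the running flag and the five-second grace period are assumptions for illustration, not the actual implementation:

```c
#include <curl/curl.h>
#include <time.h>

static volatile int server_running = 1;   /* assumed global running flag */

/* called periodically by libcurl during a transfer; returning non-zero
 * aborts it. The abort is deferred for a grace period once the server
 * starts shutting down. */
static int xfer_check (void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                       curl_off_t ultotal, curl_off_t ulnow)
{
    time_t *abort_at = clientp;
    (void)dltotal; (void)dlnow; (void)ultotal; (void)ulnow;

    if (server_running)
        return 0;                         /* normal running, carry on */
    if (*abort_at == 0)
        *abort_at = time (NULL) + 5;      /* illustrative grace period */
    return time (NULL) >= *abort_at;      /* abort once the grace expires */
}

static void attach_check (CURL *handle, time_t *abort_at)
{
    curl_easy_setopt (handle, CURLOPT_NOPROGRESS, 0L);  /* enable callbacks */
    curl_easy_setopt (handle, CURLOPT_XFERINFOFUNCTION, xfer_check);
    curl_easy_setopt (handle, CURLOPT_XFERINFODATA, abort_at);
}
```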
This pushes the #ifdef into the module for the get auth part.
first is to use the aux_data to store the user agent for the mount_add/remove
triggers instead of the shared_data. There is no parser in those cases so
you cannot use that; shared_data was used pre-aux_data, so use aux_data as it
is a better fit for now. Could become a struct if more is needed later. url
auth was the only auth to utilise this.

We used aux_data for holding the name of the command but it would leak in
certain cases. The command is copied because it could have been aliased and
a delayed auth needs to reference it later in the auth thread. In failure
cases the default send routines know nothing about the content of aux_data,
so they cannot touch it and it would leak. This instead pushes a reserved
token into the parser __admin_cnd and uses that for storage, which is
cleaned up automatically during close.
The http part is essentially the same; the difference is in relation to
dividers and end-of-pairing characters. The http complete also explicitly
adds a blank line because the complete routine does not add the end-of-pairing
characters to the last entry. This way, for http you get the blank line with
nothing following it while the previous entry has a \r\n; in the POST case
there is nothing added on the end, so no & trails the last pair.

The POST variants don't refer to a client refbuf; instead one is returned to
allow the caller to decide what to do. auth url does not use a client, for
example, and neither does YP.
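A rough sketch of the shared pairing logic described above, assuming a caller-supplied buffer that is large enough; the function name and exact behaviour are illustrative:

```c
#include <stdio.h>

/* builds "name: value\r\n...\r\n\r\n" for http headers, or
 * "name=value&name=value" for POST fields: the pairing logic is shared,
 * only the divider, end-of-pair characters and terminator differ.
 * assumes out is large enough for the result. */
static size_t pairs_complete (char *out, size_t len, const char **names,
                              const char **values, int count, int http)
{
    const char *div = http ? ": " : "=";    /* name/value divider */
    const char *sep = http ? "\r\n" : "&";  /* end-of-pair characters */
    size_t used = 0;

    for (int i = 0; i < count; i++)         /* separator only between pairs */
        used += snprintf (out + used, len - used, "%s%s%s%s",
                          i ? sep : "", names[i], div, values[i]);
    if (http)   /* terminate the last header and add the explicit blank line */
        used += snprintf (out + used, len - used, "\r\n\r\n");
    return used;
}
```

With this shape, the http output ends in a blank line while the POST output carries no trailing &, matching the behaviour described above.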
The http work allocates the block at the complete stage and inserts it at the
head of the list for the client. This means that the original requirements
for block processing at the start have changed. Before, we had a block ready
to go when entering the request routines, but that is now dropped.

One to watch though is on incoming streams from sources. They can send
stream data after the connection, so you need to keep that around to avoid
any misalignment problems.
avoid a sync issue with client flags and make sure the worker is woken
up so cleanup can occur.
a chunk of this is the conversion over to using the http work, which builds
the post fields which are passed into the libcurl handle.

The callback routines for headers and data were not that great. Most of it
stems from the problem that headers are not required to be nul-terminated,
and it is not clear whether the buffer passed in is really changeable. So,
to be clearer on this and to make processing more in line with nul-terminated
strings, these routines were re-worked to do the same sort of thing but to
make sure there are no gotchas.
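A minimal sketch of handling the nul-termination concern in libcurl's header callback; the parsing helper is a hypothetical stand-in:

```c
#include <curl/curl.h>
#include <stdlib.h>
#include <string.h>

/* assumed helper that parses a normal, nul-terminated header line */
static void process_header_line (void *ctx, const char *line)
{
    (void)ctx; (void)line;
}

/* libcurl hands each header over as a byte range that is not guaranteed
 * to be nul-terminated and should be treated as read-only, so take a
 * private, terminated copy before doing string work on it */
static size_t header_cb (char *buffer, size_t size, size_t nitems, void *userdata)
{
    size_t len = size * nitems;
    char *copy = malloc (len + 1);

    if (copy == NULL)
        return 0;                 /* a short return makes libcurl abort */
    memcpy (copy, buffer, len);
    copy[len] = '\0';
    process_header_line (userdata, copy);
    free (copy);
    return len;                   /* full length signals success */
}
```

Such a callback would be installed with CURLOPT_HEADERFUNCTION and CURLOPT_HEADERDATA.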
when the relay is established, set the initial time for the host recheck,
else it will retry immediately. Most likely not something of concern, just
some extra processing work at the beginning, but the feed may sound like
a small burst of audio at switchover if the higher-priority feed just
drops for a moment.
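A one-line sketch of that seeding, with stand-in struct and field names:

```c
#include <time.h>

/* stand-in relay with assumed members */
typedef struct relay { time_t recheck; int interval; } relay_t;

/* seed the recheck timer at establish time so the first host recheck
 * fires after the configured interval rather than straight away */
static void relay_established (relay_t *relay)
{
    relay->recheck = time (NULL) + relay->interval;
}
```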
allow a larger range of relays to start up. not excessive but allows more
in case many are configured.

introduce a start member for relay so that we can run the startup
routine without kicking off a relay early. This is targeted at relays
that are offline but for which we may need to do internal housekeeping.
Not used much for now.
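A hedged sketch of how such a start member might gate things; the struct and logic are illustrative only:

```c
#include <time.h>

/* stand-in relay with the new start member; the real struct differs */
typedef struct relay { time_t start; int enabled; } relay_t;

/* the startup routine can now run for housekeeping without kicking off
 * a relay early: an offline relay is processed but never started */
static int relay_start_due (relay_t *relay, time_t now)
{
    if (relay->enabled == 0)
        return 0;                /* offline: housekeeping only */
    return now >= relay->start;  /* defer the real start until due */
}
```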
This is to specify the number of seconds a terminating stream stays active
before it finally shuts down. This has a few possibilities for later
commits, but if a relay terminates, you may want to delay the reconnection
by a short time to allow the sources to reconnect upstream. The basic
idea is that the stream is put into a state where no further stream reading
is performed but all other actions are.

The main change in this commit is the splitting of the source_read routine
into two, one being a wrapper for catching the returns from the internal one.

The linger facility will also be able to interact with the switchover
mechanism.
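A minimal sketch of the wrapper shape, with stub stand-ins for the real routines and return codes:

```c
/* stand-in type and stubs; the real source_t, return codes and helpers differ */
typedef struct source { int lingering; } source_t;

static int source_read_internal (source_t *src) { (void)src; return 0; }

static int source_set_linger (source_t *src)    /* hypothetical helper */
{
    src->lingering = 1;   /* stream reads stop, all other actions continue */
    return 0;
}

/* the wrapper catches the internal routine's returns and maps them onto
 * actions such as entering the linger state instead of shutting down */
static int source_read (source_t *src)
{
    int rc = source_read_internal (src);
    if (rc < 0)
        return source_set_linger (src);
    return rc;
}
```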
Next update on the routines. The mechanisms for switchover follow a pattern
of setting SOURCE_SWITCHOVER and setting aux_data to refer to the new client,
then waiting for the original client to swap it over. The odd routine here is
relay_reactivated: if it finds a relay for a source that is terminating, then
there will be no relay client. In this case we can set the source client
directly, as there is no race between us, the listeners and another client.

The switchover routines for the relay client always proceed to client free-up,
but it is just a matter of whether the relay is freed (priority host
switchover) or not (e.g. source override of relay).

With the linger-for setting, we can now have a source linger for a time
after termination; a subsequent connection by a source client is then
treated as a switchover and jumps in, giving another mechanism for
changing sources without a fallback.

The previous update for switchover did not handle inactive on-demand
relays, so now a source can override an on-demand relay and drop back
when finished.

The linger-for setting does apply even in a multi-host relay. This is
something that may need addressing later.
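A sketch of the posting side of that pattern; the flag value and struct layout are illustrative, not the real definitions:

```c
/* illustrative flag value and struct layout; the real definitions differ */
#define SOURCE_SWITCHOVER (1 << 12)

typedef struct client client_t;
typedef struct source
{
    unsigned int flags;
    void *aux_data;
} source_t;

/* the requester only posts the replacement client and raises the flag;
 * the original client notices the flag and performs the actual swap,
 * so ownership of the source is never contested */
static void switchover_request (source_t *src, client_t *replacement)
{
    src->aux_data = replacement;
    src->flags |= SOURCE_SWITCHOVER;
}
```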
karlheyes and others added 30 commits May 28, 2023 01:45
This may be affecting the safari connections.
here we keep a start and end position in the connection_t and use that to cause
a stream stop. we take into account the difference between files and streams,
and whether there is anything to skip over at the beginning of a file.

The oddball here is safari with its multiple requests to assess things, but
hopefully that can be tightened up to sort it.
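A hedged sketch with assumed member names showing how stored start/end positions could drive the initial skip and the stop:

```c
#include <stdint.h>

/* stand-in connection with assumed range members */
typedef struct connection { uint64_t start_pos, stop_pos, sent; } connection_t;

/* bytes still to skip at the beginning of a file before sending */
static uint64_t connection_skip (connection_t *con)
{
    return con->sent < con->start_pos ? con->start_pos - con->sent : 0;
}

/* non-zero once the end position is reached, so the caller can cause
 * a clean stream stop instead of relying on a connection error */
static int connection_range_done (connection_t *con)
{
    return con->stop_pos && con->sent >= con->stop_pos;
}
```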
In cases with many quickly rescheduling clients, the limiter was far too high
to trigger a reset.
While the connection error is often used to mark the end of a stream, in a few
cases we may want to stop the current stream and restart another on the same
connection; these markers allow for that.
It still leaves the infamous double bandwidth stream bug, but should play
at least.
a lock was missing on a file handle. This could lead to an abort or hang
depending on the underlying locking implementation.
some small re-arrangement on initial byte assessment, not a significant issue
but is cleaner.
traditionally, this is generated by icecast based on the specified hostname
and other settings.  While this works well, some may want to be explicit in
what is provided to any directories

Note that directories will still do checks so if the provided address does
not match the incoming request then it can be rejected, subject to any
policies the directory uses
safari is a problem child really, but use a redirect trick to add a query param
to mitigate caching effects.  Try out forcing any aacp content type to aac but
still allow the _hdr setting to override.

karl
This was out of sight for some time but the main components were correct. The
memory block allocated has been made slightly larger but now can be expanded
on the rare cases there is a need.
…arlheyes#242

Use the css capabilities built into the browser, which allows us to avoid
the extra requests to get the images.
The other part of the pull is the multi-user shoutcast compatible handling. That
already exists using a similar type of layout, but whether there is something
different in the implementation will need to be assessed. The web page changes
though are fine.

karl.
Some metrics in the xsl we imported were hidden:

- icestats/sources
- icestats/clients
- icestats/listeners

This choice wasn't ours: we just imported a remote xsl file from
the Internet. Today we want to be able to graph these metrics, so
we unhide them here.